e-Waste captured by a Nikon D40 in 2017 | Source (edited by the Author)
An Open Letter to Cory Doctorow: Ollama is part of the enshittification!
To start, like a lot of people in this space, I'm a big fan of Cory Doctorow's work. He goes to bat for initiatives I firmly believe in, such as Creative Commons. He coined the term enshittification, and I've used it often in my own writings.
Which is why I, like many others, was surprised to see him dedicate a rather large portion of his sixth-anniversary post to performing apologetics for generative AI. He laments that anybody who is wholly against genAI is participating in a "purity culture" born "of the neoliberal ideology".
Is this not the same neoliberalism that, you know, gave way to OpenAI et al. receiving billions of dollars in funding despite being wholly unprofitable? The same political ideology defined by its prioritization of rapid expansion, deregulation, and free-market capitalism? Where the fuck is puritanism anywhere in there?
...a spell checker?
But I don't want to get into the weeds of politics or ethics here; there are far more intelligent people to do that. What I want to do is get into the weeds of the bizarre use case that he's defending in the first place.
Cory declares his use case for LLMs, saying he uses an offline model to "run the text [of his writing] through Ollama as a typo-catcher."
...why? I don't see how an offline LLM would be better at checking for spelling errors than, you know, spell check? He says that "catching these typos at the start of the process is a huge time-saver," which is why this is his preference, but this is a solved problem. Every text editor, ranging from OpenOffice to Sublime Text to NeoVim, has that built-in or available as a plug-in.
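And if you insist on doing it at the command line, typo-catching doesn't need an inference engine at all. Here's a minimal sketch of the idea using only Python's standard library; the tiny word list and function name are purely illustrative (a real checker would load a proper dictionary, or you'd just use aspell or hunspell):

```python
import difflib
import re

# Tiny illustrative dictionary; a real checker would load a
# system word list such as /usr/share/dict/words instead.
KNOWN_WORDS = {
    "the", "cat", "sat", "on", "mat", "a", "quick",
    "brown", "fox", "jumps", "over", "lazy", "dog",
}

def find_typos(text: str) -> list[tuple[str, str]]:
    """Return (misspelling, best_suggestion) pairs for unknown words."""
    typos = []
    for word in re.findall(r"[a-zA-Z]+", text.lower()):
        if word not in KNOWN_WORDS:
            # Suggest the closest known word, if any is close enough.
            matches = difflib.get_close_matches(word, KNOWN_WORDS, n=1)
            typos.append((word, matches[0] if matches else ""))
    return typos
```

A few dozen lines of fuzzy string matching, no GPU, no gigabytes of weights. That's the scale of the problem being "solved" here.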
There's something much more important about his choice here, though. He says technology should be liberated through "free and open versions". But Cory, Ollama is part of enshittification. It's built on llama.cpp. For over a year, Ollama's releases omitted the copyright notices that llama.cpp's MIT license requires, and Ollama's README makes no mention of llama.cpp despite depending heavily on it.
Beyond attribution issues, Ollama forked the ggml inference engine without upstream coordination, resulting in implementations incompatible with standard GGUF files and 20-30% slower than running llama.cpp directly, while also introducing proprietary model packaging that fragmented the ecosystem.
Ollama built a venture-backed commercial product on llama.cpp's foundation while not actively contributing improvements back. One developer noted their llama.cpp PR was merged in under an hour while their Ollama PR sat unreviewed for over a month.
So my question is, why on Earth is the guy who champions mindfulness and resistance to the corporate degradation of quality completely uncritical of the genAI he uses?
Another argument Cory uses is that AI scrapes the Internet no differently than search engines do, that "it's not 'unethical' to scrape the web in order to create and analyze data-sets." But search engines point people to the original creations, whereas generative AI takes this data into a black box for further development, often with no credit and almost never with permission.
Cory is so determined to champion the utility of genAI that he reverse-engineered his arguments, rather defensively might I add. He went out of his way to finger-wag people for their cognitive dissonance regarding their use of other inventions by bad people.
Acknowledging Hypocrisy
I am no stranger to the frustration of dealing with people's cognitive dissonance. To bring up an elephant in the room as an example, there are many people who are passionately and militantly against genAI use, but still participate in meat-eating and support the meat industrial complex. Being plant-based means facing people who will come up with any number of excuses to shrug off their meat consumption. Devine Lu Linvega has a great page on this on their wiki.
But in reality, any argument you can make against the use of generative artificial intelligence you can make even more so against the eating of meat, or any other form of animal exploitation. There are the environmental harms, such as livestock production accounting for 14-18% of global greenhouse gas emissions, with cattle responsible for about two-thirds of that total. Nearly 80% of agricultural land is devoted to livestock, and meat production is responsible for 75% of tropical deforestation.
Those who love their cats and dogs shrug off the suffering of these other emotionally intelligent mammals. Cows have best friends and enjoy music. Pigs outperform dogs on cognitive tests. Chickens enjoy cuddles. The propaganda of the (actual) neoliberal lobbyists maintains the status quo of this violence we've become numb and desensitized to. And I believe this desensitization is what has allowed ongoing atrocities and genocides to occur without, frankly, enough resistance.
So, yes Cory, I am more than familiar with the cognitive dissonance of people I come across on a daily basis. I also can sympathize with the fatigue and futility of trying to change people's minds. But surely that does not mean giving up, surely that does not mean a surrender to said evils. We do not throw our hands up and say well, if you're against generative AI, in order to be morally consistent you must also be against monoclonal antibodies and silicon transistors and satellite technology.
We must be mindful and critical of our shared histories, and the ways in which they continue to shape our present and future. And furthermore, we must continue to be critical of what we support with our consumption and platforms, whether implicitly or explicitly.
Cory's argument veers uncomfortably close to those made against critical race theory: that we need to get over the violence of what came before.
Conclusion
Let me spell it out. People are not against generative AI because of problematic creators. They are against it for the current, ongoing, and progressively-worse problems it harbours. While we ought to be aware and critical of racist, bigoted inventors and noteworthy people of the past, those inventions aren't actively dumbing people down, or causing psychosis and death by suicide, or poisoning water supplies in rural communities across the United States.
A 2025 MIT Media Lab study found using ChatGPT to help write essays led to reliably lower levels of neural engagement. Brain activity declines the more you become comfortable relying on AI.
In eastern Oregon, Amazon is using water already contaminated with agricultural fertilizer runoff to cool its data centres. When that contaminated water hits Amazon's sizzling equipment, it evaporates while nitrate pollution stays behind, concentrating to dangerous levels. Data centres consume up to 5 million gallons of water per day, and the discharged wastewater contains biocides, corrosion inhibitors, and heavy metals like zinc, copper, and chromium.
Look, it's a cold take to be against purity politics. It's bad. It creates a culture of "criminalizing, demonizing, or dismissing reasonable disagreements". I am a huge believer in the big tent, restorative justice, and meeting people where they are. I try my best to steelman the arguments I disagree with and practice the principle of charity.
I'm tired of writing about genAI. If you want to use an offline LLM on your laptop, nobody will stop you. Why perform rhetorical gymnastics to justify it by accusing critics of purity culture while simultaneously violating your own stated principles about open technology?
Nobody is "impure" for using AI, rather, the technology is causing measurable, accelerating damage to ecosystems and cognition.
You can say "Yes, there are serious harms. I've weighed them against my use case and made a choice." That's fine, and honest. Instead Cory wrote a defense invoking technological liberation while using a tool that exemplifies enshittification, dismissing legitimate concerns about ongoing environmental and cognitive damage as mere purity politics.
We can do better than this. The work Cory has been doing for over half a decade has taught many of us how to do better. That's why this particular argument is so disappointing.