The Piss Average Problem
The fundamental question facing online spaces in 2025 is no longer “can AI pass as human?” but rather “can humans prove they’re not AI?”
This represents a profound shift from technical doubt to existential uncertainty. It’s a crisis of faith: the bedrock assumption that we interact with other humans online has collapsed. And I’m not being hyperbolic. In 2024, bot traffic exceeded human traffic for the first time in a decade, hitting 51%. We’ve crossed the threshold. The internet is now majority non-human.
When I venture onto the Internet, particularly places like LinkedIn or Substack or any social media’s comment section, Dead Internet Theory starts to look like a valid hypothesis. This once-fringe conspiracy theory, which holds that the Internet is now mostly bots talking to bots, has become many people’s lived experience.
It is effortless to spot people posting broetry written by AI. But ask the question “to what end?” and the entire house of cards falls apart.
People are publishing posts and newsletters and copy and all sorts of writing that they didn’t create and didn’t even try to make look human. They do this to generate views, perhaps for a product or service or brand, and yet those views are most likely just as artificial and hollow and profane.
54% of LinkedIn’s long-form posts are now AI-generated, representing a 189% increase since ChatGPT launched. On Reddit, AI content increased 146% from 2021 to 2024, with some subreddits, like creative writing communities, hitting 41% artificial content.
This is the existential problem. People are not engaging in good faith. They take shortcuts, justifying it because “everybody else does,” while simultaneously expecting genuine reciprocation. People want authentic engagement from an audience they’ve never authentically engaged with themselves.
Marshall McLuhan warned us decades ago: “the medium is the message.” When the medium becomes automated content mills, the message becomes nothing here matters.
Perhaps it is even worse that people waste their time and energy making apologetics for this mode of creation, trying to justify artificial means instead of doing the hard work that good output truly requires. As writer Ted Chiang argued in The New Yorker, ChatGPT is essentially “a blurry JPEG of the web,” a lossy compression of human knowledge that produces plausible-sounding but ultimately hollow simulacra.
If you look at images created by ChatGPT recently, they almost always have a yellowish hue by default. This is a symptom of something called model collapse: AI systems trained on their own outputs gradually degrade in quality. The data is fed back into itself, slowly converging on piss average, a metaphor for both the colour and the quality.
The yellowing problem became so widespread that third-party tools emerged specifically to fix it: UnYellowGPT, Yellowtint.art, De-Yellow. Starting in March 2025, users documented this yellowish/sepia tint affecting ChatGPT-4o, DALL-E 3, and OpenAI’s Sora video model. You can now immediately recognize an image as AI-generated by its yellow bias.
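You don’t have to eyeball it, either; the cast is measurable. Here’s a minimal Python sketch, assuming Pillow and NumPy are installed and using hypothetical filenames, that scores how far an image’s average colour leans yellow, i.e. how much red and green outweigh blue:

```python
from PIL import Image
import numpy as np

def yellow_bias(path: str) -> float:
    """Crude yellow-cast score: how far the mean of the red and green
    channels exceeds the mean of the blue channel (0-255 scale).
    Positive values indicate a warm/yellow tint; near zero is neutral."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    means = rgb.reshape(-1, 3).mean(axis=0)  # [mean R, mean G, mean B]
    return (means[0] + means[1]) / 2 - means[2]

# Hypothetical files, for illustration only:
print(yellow_bias("ai_generated.png"))  # sepia-tinted output scores clearly positive
print(yellow_bias("photo.jpg"))         # a neutral photo sits closer to zero
```

This is a blunt instrument, of course; plenty of honest photographs are warm-toned. But it captures why the tint is so recognizable: it’s a consistent statistical shift, not an artistic choice.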
The degradation goes deeper than colour correction. Oxford and Cambridge researchers published landmark findings in Nature demonstrating that when AI models train on AI-generated content, they develop irreversible defects. The OPT-125m language model they tested degraded within just nine generations, its outputs devolving from coherent text into complete gibberish. They started with a coherent prompt about architecture; by generation five it produced degraded lists of languages, and by generation nine it had descended into nonsense.
Rice University researchers coined the perfect term for this: Model Autophagy Disorder (MAD). They found that models “go MAD” after approximately five iterations of training on artificially-created data. Just as mad cow disease’s prions corrupt biological systems through recursive consumption, AI models corrupt their statistical distributions through recursive training. The visual evidence is stark: first-generation face images appear diverse and realistic, but by the ninth generation they show significant quality collapse, repetitive features, and obvious artifacts.
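You can watch a cartoon version of this in a few lines of code. The sketch below is my own toy illustration, not the Nature or Rice experiments: each “generation” fits a trivial model (just a mean and standard deviation) to its data, then the next generation trains only on that model’s samples. The tiny sample size is deliberate, to make the effect visible quickly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data, drawn from a standard normal distribution.
data = rng.normal(0.0, 1.0, size=20)

for gen in range(1, 51):
    # "Train" a trivial model on the current data: estimate its mean and
    # standard deviation. Then discard the data and let the next
    # generation learn only from the model's own samples.
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma, size=20)
    if gen % 10 == 0:
        print(f"generation {gen:3d}: std = {sigma:.3f}")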
Another term captures this: “Habsburg AI,” coined by lecturer Jathan Sadowski. Like the Habsburg dynasty’s genetic deformities from centuries of cousin marriage, AI trained on AI outputs develops statistical deformities: loss of diversity, bias amplification, and convergence toward narrow representations.
This is why mediocrity is the best you’re going to get. The lowest common denominator can never be high enough. When you optimize for what appeals to the broadest possible audience with the least possible effort, you get content that offends no one and moves no one. You get the TikTokification of everything: short, digestible, forgettable, designed by algorithm rather than authored by humans.
A Mathematical End of Human-Only Spaces
I could try to build a new platform or community meant to be human-only. It’s impossible, though. AI is the worst it’s ever going to be right now, and its increasing sophistication makes reliable detection an ever more hopeless task.
University of Maryland researchers have demonstrated that detection accuracy drops from 97% to 57% under paraphrasing attacks, effectively a coin flip. Professor Soheil Feizi stated unequivocally that “at this point, it’s impossible. And as we have improvements in large-language models, it will get even more difficult.”
The tools don’t work. GPTZero claims 99% accuracy but achieves only 80% in independent testing, with a 35% false negative rate. Turnitin admits to missing approximately 15% of AI content while producing false positives that could wrongly accuse up to 8 innocent students in a class of 200, a 4% false-accusation rate. Even more notably, ZeroGPT flagged the preamble of the U.S. Constitution, Arthur Conan Doyle’s 1891 story “A Scandal in Bohemia” (76% AI), and George W. Bush’s 2008 State of the Union Address (93% AI) as machine-generated.
OpenAI itself shut down its own AI detector in July 2023 after achieving only a 26% true positive rate. When the creator of the AI cannot reliably detect its own outputs, the game is fundamentally over.
Major universities have abandoned detection entirely. Vanderbilt explicitly disabled Turnitin’s AI detector in 2023. The University of Pittsburgh announced that it would not support any AI detectors, citing “risk of loss of student trust, confidence and motivation, bad publicity, and potential legal sanctions.”
The human cost of false positives is real. Stanford research revealed a xenophobic bias: 61% of writing by non-native English speakers was flagged as AI-generated, while native speakers were almost never falsely accused. The detectors mistake structured, predictable language, common among ESL writers and neurodivergent students, for machine output, systematically discriminating against marginalized groups.
What does the future hold? Europol estimates that 90% of online content may be synthetically generated by 2026. One year away. We’re not talking about a distant dystopia. We’re talking about next year.
Hmm…
Recently, I got a boost on one of my Medium posts and received a lot of comments for the first time in a while. On an article about mise en place for writers, someone wrote:
“I really enjoyed this. There is something so grounding about borrowing mise en place from the kitchen and applying it to writing. I’ve always been more of a ‘spark first, chase the momentum’ person, but lately I’m learning how much calmer and more productive life becomes when the desk, the tools, and the little ritual are ready before the muse shows up.”
And on another article about the current state of the internet:
“Intense read, Brennan. But here’s my pushback: if the internet should terrify us, then why do we still treat it as a playground rather than a battleground? Maybe the real question isn’t how scary the system is, but why we continue to participate in its construction so willingly. Are we defenders of the status-quo, or unwilling engineers of our own entrapment?”

I can’t prove it, but something about these comments doesn’t sit right with me. There’s an uncanny valley, a hollowness, a no-name imitation of humanity; margarine instead of butter. The cadence is too polished, the insights too formulaic, the “pushback” too perfectly structured with rhetorical questions that sound profound but don’t actually engage with the specifics of what I wrote. These are the kind of comments that look like engagement but feel like performance: AI trained on what engagement is supposed to look like.
Even when I get what appears to be genuine human interaction, I can’t be sure anymore. Epistemic foundations have collapsed.
This crisis isn’t really about AI at all. It’s about faith. Faith that the person on the other end of the screen is real. Faith that the effort we put into crafting something meaningful will be met with actual human attention. Faith that the IndieWeb ideal of owning your own content and connecting directly with others can survive in an age of automated engagement farming.
There is the idea of surrender, of accepting this new normal, of simply writing my own good human work and assuming good faith that when I encounter other good work, it was made with time and effort, blood and sweat and tears. But is that not to go gentle into that good night?
Genuine connection requires vulnerability, craft, and time. It requires showing up as yourself, not as an AI-optimized version of yourself, not as a carefully A/B-tested brand persona, but as a flawed and searching and human presence.
NFTs: We’ve Seen This Before
The pathetic and embarrassing NFT/blockchain trend that came right before generative AI comes to mind. Unlike generative AI and LLMs, NFTs provided no actual value and solved no tangible problem. It was shocking and laughable watching people evangelize a .jpg and explain how blockchain proved its ownership and worth.
The NFT market peaked at $17 billion in 2021, with Bored Ape Yacht Club reaching floor prices of $429,000. By 2024, the market had collapsed by 97%, and 96% of more than 5,000 tracked NFT collections were “dead,” with zero trading activity. Justin Bieber’s BAYC, purchased for $1.31 million, is now worth $59,090 (a 95% loss). Logan Paul’s Azuki NFT, bought for $623,000, is now worth $10. Sina Estavi’s $2.9 million purchase of Jack Dorsey’s first tweet received a best offer of $1,226.
I think of just how plainly bad NFT art was, purely from an aesthetic standpoint. Art critic Jerry Saltz called Beeple’s work “really really derivative Sci-Fi and Conan.” Dean Kissick described NFTs as “images of images… tired art, recycled pop, bad taste.” I think of the NFT games that were garbage and dead on arrival: Team17 announced and canceled MetaWorms within 24 hours after a 96% dislike ratio, and Ubisoft blamed gamers for “not getting it” after its NFT initiative was universally mocked.
The funniest thing is that there was no need for this, was there? Even if the technology wasn’t any sort of saving grace, the art easily could have looked good and been enticing. But it wasn’t, because the kind of people who see this sort of thing as a solution do not have taste; they do not care to experience beauty and the sublime. There is too much contemplation and work required when money could be made instead.
The evangelism patterns mirror current AI hype with unsettling precision. NFT proponents promised “true digital ownership,” “democratization of art,” and “empowering creators.” AI evangelism deploys similar rhetoric: solving “all problems from climate change to world hunger,” representing “revolutionary breakthroughs.” Both rely on buzzword saturation. Both involve celebrity amplification, and the celebrity promoters faced lawsuits after the NFT collapse. Jimmy Fallon, Justin Bieber, Madonna, and Paris Hilton were among those named in a December 2022 class action alleging that the “company’s entire business model relies on using insidious marketing and promotional activities from A-list celebrities.”
Bill Gates said in June 2022 that “NFTs are 100% based on greater fool theory”: investing where profits depend solely on finding someone willing to pay more, not on intrinsic value. The core problem was that NFT ownership meant owning “a notation on the blockchain that says you own a pointer to some web server,” not the image, the copyright, or the intellectual property. Anyone could right-click and save the JPEG; ownership was a database entry.
There are parallels to AI art, and to those who pretend there is aesthetic merit or intrinsic worth in these outputs. The same existential problem remains: these people do not get why art exists or why we create it. Even AI art that achieves some semblance of aesthetic is just as meaningless. It is created in bad faith, in trying to demonstrate that you made something you didn’t, in trying to enter the cultural zeitgeist without having put in the work yourself.
The Many Criticisms (& Why This One’s Worse)
Yes, there are material criticisms of AI that are more immediately measurable than the authenticity crisis. AI investment stands at $560 billion spent against only $35 billion in returns. Algorithmic bias shows an 85% hiring preference for white-associated names versus 9% for Black names. Deepfake fraud caused over $200 million in losses during Q1 2025. These are real, quantifiable harms with legal remedies and regulatory responses.
The environmental cost is staggering: training GPT-4 required 50–62 GWh of electricity, over 40 times more than GPT-3. US data centers consumed approximately 17.5 billion gallons of water in 2023 for direct cooling, a figure projected to double or even quadruple by 2028. Each 100-word AI prompt requires approximately 519 milliliters of water, about one bottle’s worth.
And then there’s the human exploitation. OpenAI’s contract with Sama employed Kenyan workers at $1.32 to $2.00 per hour to label traumatic content including child sexual abuse, bestiality, rape, and torture. TIME’s investigation documented workers developing PTSD, one describing the job plainly: “that was torture.” OpenAI abruptly terminated the contract eight months early, leaving 200 workers unemployed despite the severe mental health toll of the exposure.
These criticisms thankfully have enforcement mechanisms. You can sue for discrimination. You can regulate deepfakes. You can mandate privacy protections. But you cannot legislate authenticity back into existence once the technical capacity for verification vanishes.
The authenticity crisis operates in the realm of meaning, trust, and existential uncertainty. The harm is diffuse and philosophical: the erosion of trust when you cannot verify an interlocutor’s humanity, the loss of meaning when creative work may be machine-generated, the psychological toll of existing in spaces where authenticity is practically unverifiable. A Nature Human Behaviour study found that up to 22% of computer science papers show evidence of LLM modification, with the word “delve,” a ChatGPT favourite, showing dramatic usage spikes.
Burst.
I hope the bubble pops and we move past this embarrassing time for humanity. I hope people grow tired of the same mediocre, obvious outputs and that AI becomes a utility like a calculator, rather than the world-saver that trillions of dollars are being poured into convincing us it is.
MIT found that 95% of AI pilot projects fail to yield meaningful results despite over $40 billion in investment. OpenAI projects $44 billion in cumulative losses through 2028 despite a $500 billion valuation. 54% of institutional investors believe AI stocks are in a bubble.
The NFT collapse provides a template. The entire ecosystem of speculation, celebrity endorsement, FOMO-driven investment, and solutions seeking problems collapsed. Nike acquired RTFKT for $200 million only to wind down operations while facing a $5 million lawsuit. Meta positioned itself around the metaverse and NFTs only to quietly remove NFT integration from Instagram. DraftKings shut down Reignmakers and settled lawsuits for $10 million.
The pattern is obvious. Intense hype, massive investment, rapid abandonment when returns fail to materialize.
The difference between AI and NFTs is that AI has some genuine utility. It can write code, analyze data, generate images. But the question is whether current valuations reflect reality or repeat the greater fool theory.
What gives me hope is that people eventually see through bullshit. Not immediately; it took a few years for NFTs to collapse. But eventually. The mediocrity becomes too obvious. The outputs become too recognizable. The formulaic comments ring hollow. The LinkedIn broetry becomes a bigger joke than it already was. The engagement metrics reveal themselves as hollow and profane. The yellowish piss hue gives it all away.
And maybe, just maybe, we remember why we created things in the first place. Not for optimization or engagement farming or venture capital returns, but because we had something to say and believed someone else might want to hear it. Not because it was efficient, but because it was ours.
That’s the faith I’m choosing to hold onto. Faith that the work of creating something real and something human matters. Even if it gets buried under 90% synthetic slop noise. Even if detection becomes impossible. Even if the internet becomes majority bot.
To surrender to the automated future, to accept that authenticity is dead, to participate in the bad-faith creation of meaningless content… that feels like the real death. True death. Not of the Internet, but of the reason we came here in the first place: to connect, to create, to be seen and heard and understood by other humans.
I hope the bubble pops. I hope we look back on this era with the same embarrassment we now reserve for NFT evangelists. I hope we rediscover that the work, the real and difficult and human work, was always the point.