
Are NSFW AI Companions Actually Just Exploited Workers in Developing Countries?

In my first article, I wrote about the authenticity crisis of AI: how bot traffic has exceeded human traffic for the first time, and how we can no longer distinguish human from machine online. The Internet has become a hall of mirrors where nothing is verifiable anymore.

The Piss Average Problem: The Age of AI is a Crisis of Faith (blog.brennanbrown.ca)

In my second article, I wrote about the body count. Over thirteen confirmed deaths from AI chatbot interactions. Hundreds hospitalized with psychotic breaks. Millions developing parasocial romantic and sexual relationships with statistical prediction engines. Loneliness monetized into a feedback loop designed to make users more isolated so they’ll use the product more.

Lovebombing, Psychosis, and Murder: I was wrong about artificial intelligence. It's actually so, so much worse. (blog.brennanbrown.ca)

And while users were developing romantic attachments to what they believed were AI girlfriends and boyfriends, there was a man sitting in a single room in Nairobi’s Mathare slums, typing every word.

Read the full report here.

The testimony of Michael Geoffery Asia, “The Quiet Cost of Emotional Labor,” was recently published through the Data Workers’ Inquiry, a research project led by Dr. Milagros Miceli and funded by the Distributed AI Research Institute (DAIR), the Weizenbaum Institute, and TU Berlin. It is a first-person account of what it’s like to be the human labour behind “AI” companions.

After graduating as an air cargo agent from Nairobi Aviation College, Michael couldn’t find work in aviation. Bills piled up. He had a wife and young children. Desperation led him to Samasource (now Sama), where he labelled data to train AI systems. The pay was shit, barely enough to survive on, so he took on additional work called “chat moderation.”

The job listings appeared on platforms like TextingFactory, e-moderators, Cloudworkers, and New Media Services. The role was described vaguely as “text chat operator” facilitating “interactive and creative communication” with customers. What they didn’t say in the job description was that Michael would spend years impersonating sexual AI companions, or training them, or both. Even he isn’t entirely sure.

It sounds like a clear-cut case of “Wizard of Oz AI,” the practice where companies market AI products while secretly using human labour. But the reality Michael describes is more complex and ambiguous. More disturbing.

His job was to impersonate multiple fake personas simultaneously: sometimes male, sometimes female, sometimes gay, sometimes straight, all while engaging in erotic and sexual conversations with paying users.

He was paid $0.05 per message. He juggled three to five different identities at once. He operated under strict NDAs preventing him from telling even his wife what he actually did for work. The man watched his sense of self disintegrate under the weight of constant deception.

Michael describes the odd ambiguity:

“I always suspected that some of the people on the other side of the chats thought I was an AI companion… When I later read about AI companions, it hit me: the company was probably using me to train these systems.”

And later:

“But the confusion ran deeper than that. I began to wonder: what if I wasn’t just training an AI companion, what if I was actually impersonating one? Maybe users thought they had already purchased an AI girlfriend or boyfriend, and I was the human pretending to be the machine pretending to be human.”

What makes Michael’s testimony so unsettling is that nobody involved knew the full truth. Users didn’t know who they were talking to. Were they chatting with real people? AI? A mix? The platforms Michael worked for never made this clear to users.

Michael didn’t know what users believed. He suspected some thought he was AI; users would occasionally “test” him with questions that seemed designed to catch a bot, but he was never told either way. He also didn’t know what his labour was for. Was he providing a direct service? Training AI? Impersonating AI while it learned? All three? The company’s refusal to clarify was intentional.

Now, we don’t actually know if major AI companion platforms like Replika or Character.AI have ever used this labour model. There’s no direct evidence linking them to these practices.

We also don’t know which specific end-user platforms Michael’s conversations fed into. The companies he worked for appear to be intermediary services that white-label chat moderation; they could have provided backend infrastructure for any number of platforms.

Most worrying, we don’t know if this is still happening. Michael’s testimony covers past work; current practices may have changed, though we can only hope.

What Michael’s testimony does establish is that platforms marketed something to users, whether “real connections” or AI companions, while relying on exploited workers in Kenya to perform intimate labour for $0.05 per message.

These workers suspected they were either impersonating AI or training AI systems, and their employers never clarified which. The deliberate opacity benefited the platforms. By keeping both users and workers uncertain about what was human versus machine, companies avoided accountability altogether. This labour exists within the broader ecosystem of AI development, even if we can’t trace direct lines to specific consumer products.

The psychological damage was severe and documented. Michael describes religious crises, marital strain, and dissociation: a fundamental fracturing of identity. One of his colleagues broke up with his partner because of the work.

The confusion I’ve seen in media coverage mirrors the confusion Michael experienced. This is no accident. This is built into their business model. When platforms obscure whether users are talking to humans or AI, they create plausible deniability. If exposed, they can claim:

  • “We never said it was AI” (to users who assumed it was)
  • “We never said they were real people” (to users who assumed they were human)
  • “We were just collecting training data” (to deflect from ongoing deception)

The ambiguity also makes it impossible to regulate. How do you hold companies accountable when even the workers don’t know what they’re actually doing?

The Builder.ai Precedent

Michael’s testimony isn’t an isolated case. The most spectacular example collapsed just months ago. Builder.ai, a London-based startup once valued at $1.5 billion and backed by Microsoft and the Qatar Investment Authority, promised customers they could build custom apps through conversations with “Natasha,” their AI assistant. The pitch was slick: building software would be “as easy as ordering pizza.”

Approximately 700 engineers in India were manually writing code while pretending to be AI. Customer requests were routed to an Indian office where human developers coded everything by hand, with outputs that were often buggy, dysfunctional, and difficult to maintain. Former employees described the company as “all engineer, no AI.”

The deception lasted from 2016 until financial troubles forced the truth into the open in 2025. The Wall Street Journal first questioned the company’s AI claims back in 2019, but Builder.ai continued raising hundreds of millions from investors. When lender Viola Credit discovered Builder.ai had inflated its 2024 revenue projections by 300%, claiming $220 million when actual earnings were only around $50–55 million, the house of cards finally collapsed. The company filed for bankruptcy in May 2025, and reports indicate U.S. federal prosecutors have opened a criminal investigation.

Users thought they were experiencing cutting-edge AI. Investors poured in hundreds of millions believing the same. Meanwhile, hundreds of workers in developing countries performed the labour, unaware they were part of an elaborate fraud. Internal memos from 2022 instructed staff to “focus on our proprietary AI—human labour isn’t part of the story.”

It’s simple. Market AI capabilities you don’t have. Use low-paid workers in developing countries to fake those capabilities. Maintain plausible deniability through layers of opacity. Profit from the confusion until exposure.

If a company backed by Microsoft and the Qatar Investment Authority could run this scam for eight years with 700 workers, how many smaller operations are doing the same thing with intimate AI companions?

There are no ethics here. We do not know the supply chain for conversational AI training data. There is no oversight of platforms that blend human and AI interaction. No verification of AI claims or capabilities. And absolutely no protections exist for the workers performing this horrific labour.

Michael’s testimony concludes with clear demands that don’t require resolving every ambiguity. We need transparency about AI architecture. We need independent ethical review boards. We will not get any of it; it gets in the way of profit and quarterly shareholder earnings.

We often don’t know when “AI” is actually human workers. We are faced with yet another opaque, outsourced, and exploited industry. Michael’s testimony pulls back the curtain only to show us how obscured the machinery actually is.

Users don’t know what they’re talking to. Workers don’t know what they’re building. Companies maintain profitable ambiguity. Somewhere in Nairobi, as Michael writes, “an AI girlfriend responding to your loneliness might just be a man in a Nairobi slum, wondering if he’ll ever feel real love again.”

That could be happening right now on mainstream platforms, or happened during their development. The fact that we can’t definitively say is itself the scandal.

Brennan Kenneth Brown is a Queer Métis author and web developer based in Calgary, Alberta. He founded Write Club, a creative collective that has raised funds for literacy nonprofits. His work spans poetry, literary criticism, and independent journalism, with over a decade of writing publicly on Medium and nine published books. He runs Berry House, a values-driven studio building accessible JAMstack websites while offering pro bono support to marginalized communities.

Support my work: Ko-fi | Patreon | GitHub Sponsors | Gumroad | Amazon Author Page. Find more at blog.brennanbrown.ca.

