A woman in Texas thought she had landed a remote gig as a writing analyst. A week later, she found herself moderating violent and hateful content generated by a leading AI, work she was never warned about and for which she received no training or support. Her story is a common one in the shadowy world of AI training, where the reality of the work is often a shocking departure from the job description.
The AI industry relies on a vast, contracted workforce to act as its sense of right and wrong. These “raters” are essential for teaching AI models how to respond appropriately, but that often means being the first to see a model’s most disturbing outputs. Workers describe being confronted with everything from racial slurs to sexually explicit material, all under the banal heading of a “sensitivity task.”
The psychological toll of this work is significant, yet it is largely ignored by the companies that benefit from it. Raters report anxiety, panic attacks, and moral distress stemming from both the content they review and the pressure to review it quickly. They are the human filters for the internet’s toxicity, yet they are given no filter of their own.
This bait-and-switch employment practice highlights the deep disconnect between the polished image of AI and the messy reality of its development. The “intelligence” of these systems is not just coded; it is painstakingly curated by people who are often misled about the nature of their work and left to deal with the consequences alone.