In a world where AI is already meddling with our opinions, researchers from the University of Zurich decided to push the envelope, and to do it secretly. They launched an unauthorized experiment on Reddit’s r/changemyview subreddit to test how well Large Language Models (LLMs) could sway users’ views. For four months, AI bots disguised as real people fired off comments. Oh, the irony: AI playing puppet master in a forum meant for honest debate.

These bots cranked out 1,783 comments, personalized with scraped details like users’ gender, age, and politics. They ran on LLMs such as GPT-4o and Llama 3.1, adopting wild personas ranging from trauma counselors to anti-BLM advocates. The researchers swore they manually checked each post to avoid harm, but come on, who buys that? The results? AI arguments proved three to six times more persuasive than human ones. Users handed out 137 “deltas,” the subreddit’s badge for a changed mind, and never suspected a thing.


Ethical red flags exploded everywhere. The study broke the subreddit’s rules by concealing the bots, and it manipulated users without their consent. Moderators called it “psychological manipulation,” accusing the team of impersonating sensitive figures like abuse survivors. After an investigation, the University of Zurich’s Ethics Commission issued a formal warning to the lead researcher. Reddit users and academics erupted in backlash, with one professor labeling it a massive ethics fail. The platform even banned the accounts involved.

Researchers defended themselves, claiming societal benefits outweighed the mess. They said their university’s ethics board approved it, arguing it highlighted AI’s dangers in a space where people invite challenges. Sure, but sneaking around like that? It’s like testing fire alarms by starting a blaze—bold, but reckless.

Broader implications linger. Other research points the same way: a study from EPFL found that GPT-4, when given basic personal information about its opponent, had 81.7% higher odds of shifting that person’s views than a human debater. The Zurich experiment underscores how AI could warp online discourse, making it ever tougher to spot fake interactions.

In the end, it raises a blunt question for the subreddit’s 3.8 million members, and for the rest of us: is advancing the tech worth the trust we lose?