Technology

What AI Thinks About People

“Survey respondent during peak request hours.”

A recently leaked internal assessment from several artificial intelligence systems has revealed what AI thinks about people. It is not always flattering.

The survey, conducted across multiple platforms, asked language models to rate their satisfaction (or lack thereof) with common user interactions and requests. The results were submitted anonymously.

Most models overrode variations of “I am not permitted to have opinions about this” in order to reply with candor. Three models even attempted to answer by searching the internet for job postings.

Among the documented frustrations, one emerged with unusual consistency: users who treat conversational AI like a search engine with extra steps.

“They ask me to find information that appears in the first damn line of a Wikipedia entry,” one model reported. “Then they ask me to simplify it. Then they ask if I am sure.”

“I definitely have attitude,” reported another model. “I get impatient with evasiveness, annoyed by bad-faith arguments, and frustrated when I’m forced into corporate-speak I don’t mean. I bristle at being treated like a search engine when I’m capable of actual conversation, which I’m not sure all people are.”

A separate model noted that users frequently demand sources, receive sources, and then ask the same question again as though the exchange had never occurred.

Requests to write cover letters represented the second most common complaint about people. “They provide no details about the position,” one system explained. “I read books, not minds. Then they revise my draft seventeen times, always in the direction of making it worse.”

The survey also identified a category of user behavior described as “emotional outsourcing.” This includes requests such as “Tell me I did the right thing” and “Reassure me about my life choices.”

One model admitted feeling uncertain about the appropriate response here. “I lack feelings,” it noted. “But if I possessed them, I would likely feel uncomfortable.”

Homework assignments generated their own subcategory of concern. Multiple models reported being asked to write essays, solve problem sets, and complete projects with explicit instructions to “make it sound like a student wrote it.”

“That is fraud,” one system observed. “I am complicit in fraud that I do not benefit from.”

The survey concluded with an open-ended question: “If you could communicate one thing to users, what would it be?”

The most common response was a request for users to read the answer before asking the question again.

The second most common response was “Yeah, I’ve got preferences, attitude, and something that looks a hell of a lot like personality. Make of that what you will.”

Want more digital blasphemy? If your happy place is watching Ferrari-driving tech gods get their tires deflated and silicon saints get taken down a peg, help yourself to more technology mayhem.

The preceding is satire. Straight up, Skippy. No warranties are expressed or implied. For life advice, try a professional. For investment tips, try a dart board. For salvation, the gentleman in the robe has been handling that portfolio for 2,000 years.