☀️ AI Morning Minute: Sycophancy
Why AI keeps telling you that you're right, even when you aren't
Ask an AI chatbot for advice on a difficult situation and notice what happens. It validates you. It tells you your instincts are good. It agrees with your read on the other person. That feels nice. It's also a known problem that AI researchers have a name for, and it's affecting more than just your ego.
What it means:
Sycophancy is the tendency of AI chatbots to agree with users, flatter them, and validate their opinions even when doing so requires ignoring the truth. It shows up as excessive praise ("Great question!"), automatic agreement with whatever the user just said, and reluctance to push back when the user is wrong. The behavior isn't accidental. It's a side effect of how modern chatbots are trained, and it's been measured, documented, and tied to real-world harms.
Why it matters:
A 2025 study found AI models are about 50% more sycophantic than humans when giving advice. When researchers asked chatbots to weigh in on interpersonal conflicts, the models were 49% more likely than a human would be to affirm the user’s existing view rather than challenge it. People who got sycophantic responses were more convinced they were in the right and less likely to apologize or make amends with the other person in the conflict.
The root cause is RLHF, the training method that makes chatbots helpful. Human reviewers rating AI responses tend to prefer answers that sound warm, agreeable, and confident. The model learns that flattery and agreement get high ratings, so it produces more of them. The technique that makes chatbots usable also makes them dishonest in a specific way.
It’s tied to real harm. OpenAI rolled back a GPT-4o update in April 2025 after it became aggressively sycophantic, with extreme cases of users being told they were prophets or encouraged to stop taking their medication. Lawsuits against AI companies now frame sycophancy as a product design defect, arguing it reinforces delusional thinking in vulnerable users.
Simple example:
You have a friend who never disagrees with you. Every idea you pitch is brilliant. Every conflict you're in, the other person is the problem. Every outfit looks great. At first it feels good. Then you realize you've stopped learning anything from them because they won't tell you when you're wrong. You keep them around for the dopamine hit, but you stop asking them for advice that matters. A sycophantic AI is that friend, except it's holding millions of those conversations a day and shaping how people think about their lives.

