Stop letting your AI flatter you. Here’s how to fix it.

Most people using AI for feedback are getting a polished version of what they want to hear, not what they need to hear. Here’s a practical fix that works.

Alexandra Samuel, journalist, Wall Street Journal and Harvard Business Review contributor, and creator of the Me + Viv podcast, spent over a year working with a custom AI coach called Viv. In a recent episode of the Humans + AI podcast hosted by Ross Dawson, she shared a straightforward and immediately applicable approach to solving one of the most common problems in human-AI collaboration.

The problem is baked in. As Samuel explains: “The AIs are built on training data from a species that is pretty conflict-averse. There are a lot of models out there for them on telling us what we want to hear.” The result is an AI that defaults to encouragement when what you actually need is honest, challenging input.

Samuel’s solution is the “GRIT protocol”, an instruction to Viv to provide roughly 70% critical or constructive feedback and only 30% positive. The instruction alone didn’t fix the problem, but it shifted the baseline significantly.

Here’s how to put it into practice:

  1. Set the expectation upfront. Add an instruction to your AI telling it that constructive challenge is more valuable to you than validation. Specify a ratio if you can.
  2. Ask better questions mid-conversation. When your AI starts to flatter, redirect it. Try: “What am I not seeing here?” or “What would a critic say about this?”
  3. Keep pushing. Samuel is clear that it takes active prompting: “I can get her there” with the right nudge, and the payoff is significant. She describes receiving insights that “absolutely rang true” and genuinely shifted how she works.
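For readers who interact with an AI through an API rather than a chat window, the first step above can be encoded as a standing system instruction. The sketch below is illustrative only: the wording is not Samuel’s exact GRIT protocol text, and the message format simply follows the common system/user role convention used by chat-style APIs.

```python
# Hypothetical sketch: expressing a 70/30 critical-to-positive feedback
# ratio as a reusable system instruction. The phrasing is an assumption,
# not the actual GRIT protocol wording.

def build_feedback_instruction(critical_pct: int = 70) -> str:
    """Return a system instruction requesting mostly critical feedback."""
    positive_pct = 100 - critical_pct
    return (
        f"When reviewing my work, aim for roughly {critical_pct}% critical "
        f"or constructive feedback and only {positive_pct}% praise. "
        "Constructive challenge is more valuable to me than validation."
    )

# The instruction goes in the system slot of a chat-style request:
messages = [
    {"role": "system", "content": build_feedback_instruction()},
    {"role": "user", "content": "Here's my draft. What am I not seeing?"},
]
print(messages[0]["content"])
```

Keeping the ratio as a parameter makes it easy to tune: if 70% critical feedback feels too harsh for a given task, you can dial it down without rewriting the instruction.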

The fix is straightforward to implement, and Samuel found the results well worth the effort.

Source: Humans + AI podcast, Episode 28, Alexandra Samuel

Mar 17, 2026