Successful AI-human teams are designed around the asymmetry between humans and AI

Most teams don’t design human-AI collaboration. They bolt AI onto existing workflows and hope for the best. Here’s how to do it properly, using the asymmetry between humans and AI as a feature, not a problem to solve.

In a recent episode of the Humans + AI podcast, host Ross Dawson spoke with Davide Dell’Anna, a researcher specializing in the science of designing teams where humans and AI collaborate effectively. One of Dell’Anna’s most practical insights centers on what he calls the “inherent human-AI asymmetry”: getting it right is what separates effective hybrid teams from poor outcomes.

✅ Map your accountability layer
Go through every AI-assisted workflow and identify which outputs require a human decision-owner. If you can’t name who is accountable for an AI recommendation, the workflow isn’t finished. Accountability doesn’t transfer to AI, ever.
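
One way to make this audit concrete is a minimal sketch like the following, where the workflow names and fields are hypothetical: each AI-assisted workflow either names a human decision-owner or gets flagged as unfinished.

```python
# Hypothetical accountability audit: every AI-assisted workflow must name
# a human decision-owner; those that don't are flagged as unfinished.

workflows = {
    "invoice-triage": {"ai_assisted": True, "decision_owner": "head_of_finance"},
    "draft-marketing-copy": {"ai_assisted": True, "decision_owner": None},
    "manual-contract-review": {"ai_assisted": False, "decision_owner": None},
}

def unowned_ai_workflows(workflows):
    """Return the AI-assisted workflows with no named human decision-owner."""
    return [
        name
        for name, w in workflows.items()
        if w["ai_assisted"] and not w["decision_owner"]
    ]

print(unowned_ai_workflows(workflows))  # ['draft-marketing-copy']
```

Anything this check surfaces is, in Dell’Anna’s terms, a workflow that isn’t finished yet.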

✅ Define the operational boundary
Give your AI agents genuine autonomy within a clearly scoped domain, but set the edges explicitly. To illustrate the point, Dell’Anna uses a sheepdog analogy: “The shepherd can give very short commands… with very little need for words or other types of communication” while the dogs handle execution entirely. Strategic direction stays human; operational execution can be AI.
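
A minimal sketch of what an explicit edge can look like in code, with made-up action names: the agent acts freely inside an allowlist, and anything outside it is escalated to a human rather than attempted.

```python
# Hypothetical scoped-autonomy boundary: the agent executes any action
# inside its allowlist; everything else is escalated to the human owner.

ALLOWED_ACTIONS = {"summarize_ticket", "draft_reply", "tag_ticket"}

def handle(action: str) -> str:
    """Execute in-scope actions; escalate anything outside the boundary."""
    if action in ALLOWED_ACTIONS:
        return f"agent executed {action}"
    return f"escalated {action} to human owner"

print(handle("draft_reply"))    # agent executed draft_reply
print(handle("issue_refund"))   # escalated issue_refund to human owner
```

The design choice mirrors the sheepdog analogy: short commands define the scope, and within it the agent needs no further instruction.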

✅ Audit for false equivalence
Review your current workflows for any place where human and AI contributions are treated as interchangeable. Flag these as risk points. Interchangeability masks accountability gaps and tends to surface at exactly the wrong moment.

✅ Document the asymmetry explicitly
Write down which rules apply to your human team members and which apply to your AI agents. Dell’Anna is clear: “Different rules or expectations apply to different team members.” If your team documentation doesn’t reflect this, your team design is incomplete.
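
One lightweight way to keep that documentation honest is to state the asymmetric rules in a single machine-readable place; the rule names below are illustrative assumptions, not a canonical schema.

```python
# Hypothetical rule sheet making the human-AI asymmetry explicit:
# different expectations apply to different member types, by design.

TEAM_RULES = {
    "human": {
        "holds_accountability": True,
        "sets_strategy": True,
        "can_be_swapped_without_notice": False,
    },
    "ai_agent": {
        "holds_accountability": False,
        "sets_strategy": False,
        "can_be_swapped_without_notice": False,  # see the change-management point below
    },
}

def rules_for(member_type: str) -> dict:
    """Look up the documented expectations for a member type."""
    return TEAM_RULES[member_type]
```

If the two entries ever look identical, that is the false equivalence the previous item warns about.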

✅ Build a hierarchy, not a flat structure
Always ensure the human is in charge. Resist any workflow design that gives AI agents equal standing with human team members. Hierarchy here isn’t about status; it’s about control. As Dell’Anna puts it, a clear human-AI hierarchy “makes it possible for humans to preserve meaningful control in the interactions.”

✅ Treat AI changes like team changes
When you update, retrain, or replace an AI tool, apply people-change logic. Dell’Anna highlights that users “complained because they got attached to the previous version” when OpenAI retired an earlier ChatGPT model. Communicate the change, explain what’s different, and give the team time to recalibrate.

✅ Review the design regularly
Human-AI team design isn’t a one-time exercise. As AI capabilities evolve and team needs shift, the boundary between human and AI responsibilities needs revisiting. Schedule it.

Source: Humans + AI podcast, Episode 33, Davide Dell’Anna.

Mar 30, 2026