July 16, 2025

Jacob Taylor on collective intelligence for SDGs, interspecies money, vibe-teaming, and AI ecosystems for people and planet (AC Ep10)

“If we’re faced with problems that are moving fast and require collective solutions, then collective intelligence becomes the toolkit we need to tackle them.”

– Jacob Taylor


About Jacob Taylor

Jacob Taylor is a fellow in the Center for Sustainable Development at the Brookings Institution and a leader of its 17 Rooms initiative, which catalyzes global action for the Sustainable Development Goals. He was previously a research fellow at the Asian Bureau of Economic Research and a consulting scientist on a DARPA research program on team performance. He was a Rhodes scholar and represented Australia in Rugby 7s for a number of years.

Website:

Jacob Taylor

LinkedIn Profile:

Jacob Taylor

X Profile:

Jacob Taylor

What you will learn

  • Reimagining Team Performance Through Collective Intelligence

  • Using 17 Rooms to Break Down the SDGs Into Action

  • Building Rituals That Elevate Learning and Challenge Norms

  • Designing Digital Twins to Represent Communities and Ecosystems

  • Creating Interspecies Money for Elephants, Trees, and Gorillas

  • Exploring Vibe Teaming for AI-Augmented Collaboration

  • Envisioning a Bottom-Up AI Ecosystem for People and Planet

Episode Resources

Transcript

Ross Dawson: Jacob, it is awesome to have you on the show.

Jacob Taylor: Ross, thanks for having me.

Ross: So we met at Human Tech Week in San Francisco, where you were sharing all sorts of interesting thoughts that we’ll come back to. What are your top-of-mind reflections on the event?

Jacob: Look, I had a great week, and largely because of all the great people I met, to be honest. And I think what I picked up there was people really driving towards the same set of shared outcomes.

Really people genuinely building things, talking about ways of working together that were driving at outcomes, ultimately, for human flourishing, for people and planet.

And I think that’s such an important conversation to have at the moment, as things are moving so fast in AI and technology, and sometimes it’s hard to figure out where all of this is leading, basically. And so to have humans at the center is a great principle.

Ross: Yeah, well, where it’s leading is where we take it. So I think having the humans at the center is probably a pretty good starting point.

So one of the central themes of this blog—for this podcast for ages—has been collective intelligence. And so you are diving deep into applying collective intelligence to achieve the Sustainable Development Goals, and I would love to hear more about what you’re doing and how you’re going about it.

Jacob: Yeah, so I mean, very quickly, I’m an anthropologist by training. I have a background in elite team performance as a professional rugby player, and then studying professional team sport for a number of years.

So my original collective is the team, and that’s kind of my intuitive starting point for some of this. But teams are very well built to solve problems that no individual can achieve alone, and really a lot of the SDG problems that we have—issues that communities at every scale have trouble solving on their own—need a whole community to tackle a problem, rather than just one individual or set of individuals within a community.

So the SDGs are these types of—whether it’s climate action or ending extreme poverty or sustainability at the city level—all of these issues require collective solutions. And so if we’re faced with problems that are moving fast and require collective solutions, then collective intelligence becomes the toolkit or the approach that we need to use to tackle those problems.

I’ve been thinking a lot about this idea that in the second half of the 20th century, economics as a discipline went from pretty much on the margins of policymaking and influence to right at the center. By the end of the 20th century, economists were at the heart of informing how decisions were made at the country level, at firms, and so on. That was because an economic framework really helped make those decisions.

I think my sense is that the problems we face now really need the toolkit of the science of collective intelligence. So that’s kind of one of the ideas I’ve been exploring—is it time for collective intelligence as a science to really inform the way we make decisions at scale, particularly for our hardest problems like the SDGs?

Ross: So at the Brookings Institution, one of your initiatives is 17 Rooms. I’m so intrigued by the name, what it is, and how it works.

Jacob: Yeah. So, 17 Rooms. We have 17 Sustainable Development Goals, and so on. Seven or eight years ago now, 17 Rooms asked: what if we found a method to break down that complexity of the SDGs?

A lot of people talk about the SDGs as everything connected to everything, which sometimes is true. There are a lot of interlinkages between these issues, of course. But what would it look like to actually break it down and say, let’s get into a room and tackle a slice of one SDG?

So Room 1: SDG 1 for ending extreme poverty. Let’s take on a challenge that we can handle as a team.

And so 17 Rooms gathers groups of experts into working groups—or short-term SWAT teams of cooperation, basically—and really gets them to think through big ideas and practical next steps for how to bend the curve on that specific SDG issue.

Then there’s an opportunity for these rooms or teams to interact across issues as well. So it provides a kind of “Team of Teams” platform for multi-stakeholder collaboration within SDG issues, but also connecting across the full surface of these problems as well.

Ross: So what from the science of collective intelligence—or anything else—what specific mechanisms or structures have you found useful as you try to enable collective intelligence within and across these rooms or teams?

Jacob: Yeah, so I think—I mean, they’re all quite basic principles. We do a lot on trying to curate teams and also trying to run them through a process that really facilitates collaboration. But the principles are quite basic, really.

I mean, one of the most fundamental principles is taking an action stance. One of the biggest principles of collective intelligence is that intelligence comes from action. This is a principle we get from biology: biology acts first and then learns on the run. So you don’t kind of sit there and go, what kind of action could we take together as a multicellular organism—rather, it just unfolds, and then learning comes off the back of that action.

So in that spirit, we really try to gear our teams and rooms into an action stance, and say, rather than just kind of pointing fingers at all the different aspects of the problem, let’s say: what would it look like for us in this room to act together? And then, what could we learn from that?

Trying to get into that stance is really foundational to the 17 Rooms initiative.

And then I think the other part is really bonding or community—so knowing that action and community are two sides of the same coin. When you act together, you connect and you share ideas and information. But likewise, communities of teams that are connected are probably more motivated to act together and to be creative and think beyond just incentives, like: what can we really achieve together?

And so we try to pair those two principles together in everything that we do.

Ross: So this comes back to this point—there’s many classic frameworks and realities around acting and then learning from that. So your OODA Loop, your observe, orient, decide, act, or your Lean Startup loop, or Kolb’s learning cycle, or whatever it might be, where we act, but we only learn because we have data or insight.

So that’s a really interesting point—where we act, but then, particularly in a collective intelligence perspective, we have all sorts of data we need to filter and make sense of that not just individually, but collectively—in order to be able to understand how it is we change our actions to move more towards our outcomes.

Do you have any structures for being able to facilitate that flow of feedback or data into those action loops?

Jacob: Yeah, I think—and again, I’m very biased as an anthropologist here—so the third principle that we think about a lot, and that answers your question, is this idea of ritual.

We’re acting, we’re connecting around that action, and that’s a back-and-forth process. But then rituals actually are a space where we can elevate the best ideas that are coming out of that process and also challenge the ideas that aren’t serving us.

Famously across time for humans, ritual has been an opportunity both to proliferate the best behaviors of a society, but also to contest the behaviors that aren’t serving performance. Ultimately—you don’t always think about this in performance terms—but ultimately, when you look at it big picture, that’s what’s happening.

So I think rituals of differentiation between the data that are serving us versus not, I think is really important for any team, organization, or community.

Ross: That’s really interesting. Could you give an example of a ritual?

Jacob: Well, so there are rituals that can really—like walking on hot coals. Again, let’s start anthropological, and then maybe we can get back to collective intelligence or AI.

Walking on hot coals promotes behaviors of courageousness and devotion. Whereas in other settings, you have a lot of rituals that invert power structures—so men dressing up as women, women dressing up as men, or the less powerful in society being able to take on the behaviors of the powerful and vice versa.

That actually calls out some of the unhelpful power asymmetries in a society and challenges those.

So in that spirit, I think when we’re thinking about high-performing teams or communities tackling the SDGs, I think there needs to be more than just… I’m trying to think—how could we form a ritual de novo here?

But really, there needs to be, I guess, those behaviors of honesty and vulnerability as much as celebration of what’s working. That maybe is easier to imagine in an organization, for example, and how a leader or leaders may try to really be frank about the full set of behaviors and activities that a team is doing, and how that’s working for the group.

Ross: So you’ve written a very interesting article referring to Team Human and the design principles that support—including the use of AI—and being able to build better team performance. So what are some of the design principles?

Jacob: Well, I think this work came out of a DARPA program I worked on before coming to Brookings, around building mechanisms for collective intelligence. And when you boil it down to that fundamental level, it really comes down to having a way to communicate between agents or between individuals, which in psychology is referred to as theory of mind.

So, do I have a theory of Ross—what you want—and do you have a theory of what I want? That’s basically social intelligence. It’s the basic key here.

But it really comes down to some way of communicating across differences. And then with that, the other key ingredient that we surfaced when we built a computational model of this, in a basic way, was an ability to align on shared goals.

So it feels like there’s some combination of social intelligence and shared goals that is foundational to any collective intelligence that emerges in teams or organizations or networks. And so trying to find ways to build those—whether that’s at the community level…

For example, say a city wants to develop its waste recycling program. If you break that down, it really is a whole bunch of neighborhoods trying to develop recycling practices. So the question for me is: do all those neighborhoods have a way of communicating to each other about what they’re doing in service of a shared goal of, let’s say, a completely circular recycling economy at the city level?

And if not, then what kind of interaction and conversations need to happen at the city level so that you can share best practices, challenge practices that are hurting everyone, and then find a way to drive collective action towards a shared outcome? But I’d also think about that at the team level, where there are ways to really encourage theory of mind and perspective sharing.
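As a rough illustration of those two ingredients, here is a toy sketch in Python (purely illustrative, and not the DARPA model itself) in which each agent keeps a theory-of-mind estimate of the other’s goal, updates it from observed actions, and acts on a blend of its own goal and that estimate:

```python
# Toy sketch of two collective-intelligence ingredients from the conversation:
# theory of mind (each agent estimates the other's goal) and shared-goal alignment.
import numpy as np

rng = np.random.default_rng(0)

class Agent:
    def __init__(self, goal):
        self.goal = np.asarray(goal, dtype=float)        # what this agent actually wants
        self.belief = rng.normal(size=self.goal.shape)   # its "theory" of the other's goal

    def observe(self, other_action, lr=0.3):
        # Theory of mind as simple belief updating: nudge the estimate of
        # the other's goal toward the direction they actually moved.
        self.belief += lr * (other_action - self.belief)

    def act(self, weight=0.5):
        # Act on a blend of my own goal and my estimate of yours; the blend
        # is a crude stand-in for aligning on a shared goal.
        blended = (1 - weight) * self.goal + weight * self.belief
        return blended / np.linalg.norm(blended)

def joint_performance(a_action, b_action):
    # Collective payoff is higher when the two actions point the same way.
    return float(a_action @ b_action)

alice, bob = Agent(rng.normal(size=3)), Agent(rng.normal(size=3))
for step in range(10):
    a, b = alice.act(), bob.act()
    alice.observe(b)
    bob.observe(a)
    print(step, round(joint_performance(a, b), 3))
```

As the belief updates accumulate, the two agents’ actions drift toward each other, which is the toy version of a team starting to click.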

Ross: So, in some of that work, you refer to digital twins—essentially being able to model how people might think or behave. If you are using digital twins, how is that put into practice in being able to build better team performance?

Jacob: Yeah, great. Yeah, that’s probably really where the AI piece comes in.

Because that recycling-at-the-city-level example that I shared—this kind of collective intelligence happens without AI.

But the promise of AI is to say, well, if you could actually store a lot of information in the form of digital twins that represented the interests and activities of, let’s say, neighborhoods in a city trying to do recycling—

Well, then you could look for patterns and opportunities for collaboration beyond what human cognition can handle, by leveraging the power of AI to recognize patterns across diverse data sets.

The idea is you could kind of try to supercharge the potential collective intelligence about problem-solving by positioning AI as a team support—or a digital twin that could say, hey, actually, if we tweak our dials here and use this approach, that could align with our neighbor’s approach, and maybe we should have a chat about it.

So there’s an opportunity to surface patterns, but then also potentially perform time-relevant interventions for human decision-makers to help encourage better outcomes.
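A minimal sketch of what a community-level twin might look like in code, assuming nothing more than a record of each neighborhood’s goal, surpluses, and needs. All names and fields here are hypothetical:

```python
# Hedged sketch of the "community digital twin" idea: each twin is a small
# record of a neighborhood's interests and capacities, plus a matcher that
# flags complementary neighbors. Illustrative only, not an existing system.
from dataclasses import dataclass, field

@dataclass
class NeighborhoodTwin:
    name: str
    goal: str                                    # the shared city-level target
    surplus: set = field(default_factory=set)    # capacity this neighborhood can offer
    needs: set = field(default_factory=set)      # gaps it is trying to fill

def suggest_collaborations(twins):
    """Surface pairs where one twin's surplus covers another twin's need."""
    for a in twins:
        for b in twins:
            if a is not b:
                overlap = a.surplus & b.needs
                if overlap:
                    yield f"{a.name} could help {b.name} with {sorted(overlap)}"

twins = [
    NeighborhoodTwin("Northside", "circular recycling", {"glass pickup"}, {"compost"}),
    NeighborhoodTwin("Riverton", "circular recycling", {"compost"}, {"glass pickup"}),
]
for suggestion in suggest_collaborations(twins):
    print(suggestion)  # the time-relevant nudge a human decision-maker could act on
```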

Ross: I think you probably should try a different phrase, because “digital twin” sounds like you’ve got a person, then you’ve got a copy of that person.

Whereas you’re describing it here as representing—could be a neighborhood, or it could be a stakeholder group. So it’s essentially a representation, or some kind of representation, of the ways of thinking or values of a group, potentially, or community, as opposed to an individual.

Jacob: Indeed, yeah. I think this is where it all gets a bit technical, but yeah, I agree that “twin”—”digital twin”—evokes this idea of an individual body.

But if you extend that out, when you really take seriously some of the collective intelligence work, it’s like collectives become intelligent when they become a full thing, like a body—when they really individuate as a collective.

Teams really click and perform when they become one—so that it’s no longer just these individual bodies. It’s like the team is a body.

So I think in that spirit, when I think about this, I actually think about neighborhoods having a collective identity. That could be reflected in their twin, a twin of the community.

But I agree there’s maybe some better way to imagine what that kind of community AI companion looks like at higher scales.

Ross: So at Human Tech Week, you shared this wonderful story about how AI could represent not just human groups, but also animal species.

I’d love to hear it—I think it really gives this a very real context, because you’re approaching it from another frame.

Jacob: Yeah. And I think it’s true, Ross.

I’ve been struck by how much this example of interspecies money—that I’ll explain a little bit—is not only exciting because it has potential benefit for nature and the beautiful natural environment that we live in, but I think it actually helps humans understand what it could look like to do it for us too.

And so, interspecies money, basically, is this idea developed by a colleague of ours at Brookings, Jonathan Ledgard. We had a room devoted to this last year in 17 Rooms to try and understand how to scale it up.

But what would it look like to give non-human species—like gorillas, or elephants, or trees—a digital ID and a bank account, and then use AI to reverse engineer or infer the preferences of those animals based on the way they behave?

And then give them the agency to use the money in their bank account to pay for services.

So if gorillas, for example, rely most on protection of their habitat, then they could pay local community actors to protect that habitat, to extend it, and to protect them from poachers, for example.

That could all be inferred through behavioral trace data and AI, but then also mediated by a trustee of gorillas—a human trustee.
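One way to picture the mechanics in code: a species account whose spending is proposed from inferred preferences and approved by a human trustee. Every identifier below is hypothetical and not drawn from the actual pilots:

```python
# Illustrative sketch of interspecies money as data structures: a digital ID
# with a balance, an AI stand-in that infers a top need from behavioral trace
# data, and a human trustee who approves any payment.
from dataclasses import dataclass

@dataclass
class SpeciesAccount:
    species_id: str    # digital ID for, say, a gorilla family group
    balance: float     # funds held on the species' behalf

def infer_top_need(trace_data):
    # Stand-in for the AI step: pick the habitat need that the behavioral
    # trace data (ranging patterns, stress indicators, etc.) weights highest.
    return max(trace_data, key=trace_data.get)

def propose_payment(account, trace_data, services, trustee_approves):
    need = infer_top_need(trace_data)
    cost = services.get(need)
    if cost is None or cost > account.balance:
        return None
    if trustee_approves(need, cost):   # the human trustee stays in the loop
        account.balance -= cost
        return f"{account.species_id} paid {cost} for {need}"
    return None

account = SpeciesAccount("gorilla-group-7", balance=500.0)
traces = {"habitat patrol": 0.7, "corridor extension": 0.2, "water access": 0.1}
services = {"habitat patrol": 120.0, "corridor extension": 400.0}
print(propose_payment(account, traces, services, lambda need, cost: True))
```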

It’s quite a futuristic idea, but it’s actually really hit the ground running. At the moment, there are pilots with gorillas in Rwanda, elephants in India, and ancient trees in Romania.

So it’s kind of—the future is now, a little bit, on this stuff.

I think what it really does is help you understand: if we really tried to position AI in a way that helps support our preferences and gives agency to those from the bottom up, then what?

What world would that look like?

And I think we could imagine the same world for ourselves. A lot of our AI systems at the moment are kind of built top-down, and we’re the users of those systems.

What if we were able to build them bottom-up, so that at every step we were representing individual, collective, community interests—and kind of trading on those interests bottom-up?

Ross: Yeah, well, there’s a lot of talk about AI alignment, but this is, like, a pretty deep level of alignment that we’re talking, right?

Jacob: Right.

And yeah, I think Sandy Pentland, who I shared the panel with—he has this idea of, okay, so there are large language models.

What would it look like to have local language models—small language models that were bounded at the individual?

So Ross, you had a local language model, which was the contents of your universe of interactions, and you could perform inferences using that.

And then you and I could create a one-plus-one-plus-one-equals-three kind of local language model, which was for some use case around collective intelligence.

This kind of bottom-up thinking, I think, is actually technically very feasible now.

We have the algorithms, the understanding of how to train these models. And we also have the compute—in devices like our mobile phones—to perform the inference.

It’s really just a question of imagination, and also getting the right incentives to start building these things bottom-up.
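A rough sketch of that bottom-up intuition, using simple word-count profiles as a stand-in for local models. Real local language models would be small on-device neural networks; the point here is the pooling step, where only aggregated statistics are shared:

```python
# Sketch of the "local language model" intuition: each person builds a profile
# from a corpus that never leaves their device, and two people pool profiles
# for a joint inference (the one-plus-one-equals-three step).
from collections import Counter

def local_profile(documents):
    """Build a per-person profile from text that stays on their own device."""
    counts = Counter()
    for doc in documents:
        counts.update(doc.lower().split())
    return counts

def pool(*profiles):
    # Only aggregated statistics cross the device boundary, not raw text.
    merged = Counter()
    for profile in profiles:
        merged.update(profile)
    return merged

ross = local_profile(["collective intelligence for teams", "amplifying cognition"])
jacob = local_profile(["collective action for the SDGs", "teams and rituals"])
joint = pool(ross, jacob)
print(joint.most_common(3))   # shared themes surface from the pooled profiles
```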

Ross: So one of the things you’ve written about is vibe teaming.

We’ve got vibe coding, we’ve got vibe all sorts of things. You and your colleagues created vibe teaming.

So what is it? What does it mean? And how do we do it?

Jacob: Good question.

Yeah, so this is some work that a colleague of mine, Kirsch, and I did at Brookings this year.

We got to a point where, with our teamwork—you know, Brookings is a knowledge work organization, and we do a lot of that work in teams. A lot of the work we do is to try and build better knowledge products and strategies for the SDGs and these types of big global challenges.

The irony was, when we were thinking about how to build AI tools into our workflow, we were using a very old-school way of teaming to do that work.

We were using this kind of old industrial model of sequential back-and-forth workflows to think about AI—when AI was probably one of the most, potentially the most, disruptive technologies of the 21st century.

It just felt very ironic. To do a PowerPoint deck, Ross, you would give me the instructions. I would go away and draft it. I would take it back to you and say, “Is this right?” And you would say, “Yes, but not quite.”

So instead, we said, “Wait a minute. The internet is blowing up around vibe coding,” which is basically breaking down that sequential cycle.

Instead of individuals talking to a model with line-by-line syntax, they’re giving the model the vibe of what they want.

We’re using AI as this partner in surfacing what it is we’re actually trying to do in the first place.

So Kirsch and I said, “Why don’t we vibe team this?”

Why don’t we get together with some of these challenges and experts that we’re working with and actually get them to tell us the vibe of what they’ve been learning?

Homi Kharas is a world expert—a 40-year expert—on ending extreme poverty. We sat down with him, and in 30 minutes, we really pushed him to give us, like:

“Tell us what you really think about this issue. What’s the really hard stuff that not enough people know about? Why isn’t it working already?”

These kinds of questions.

We used that 30-minute transcript as a first draft input to the model. And in 90 minutes, through interaction with AI—and some human review at the end to make sure it all looked right and was accurate—we created a global strategy to end extreme poverty.

That was probably on par with anything that you see—and probably better, in fact, than many global actors whose main business is to end extreme poverty.

So it’s an interesting example of how AI can be a really powerful support to team-based knowledge work.

Ross: Yeah, so just—I mean, obviously, this is you.

You are—the whole nature of the vibe is that there’s no explicit, well, no specific, replicable structure. We’re going with the vibes.

But where can you see this going in terms of getting a group of complementary experts together, and what might that look like as the AI-augmented vibe teaming?

Jacob: Well, I mean, you’re right. There was a lot of vibe involved, and I think that’s part of the excitement for a lot of people using these new tools.

However, we did see a few steps that kept re-emerging. I’ve mentioned a few of them kind of implicitly here, but the big one—step one—was to really start with rich human-to-human input as a first step.

So giving the model a 30-minute transcript of human conversation versus sparse prompts was a real game changer for us working with these models.

It’s almost like, if you really set the bar high and rich, then the model will meet you there—if that makes sense.

Step two was quickly turning around a first draft product with the model.

Step three was then actually being patient and open to a conversation back and forth with the model.

So not thinking that this is just a one-button-done thing, but instead, this is a kind of conversation—interaction with the model.

“Okay, so that’s good there, but we need to change this.”
“Your voice is becoming a little bit too sycophantic. Can you be a bit more critical?”

Or whatever you need to do to engage with the model there.

And then, I think the final piece was really the need to go back and meet again together as a team to sense-check the outputs, and really run a rigorous human filter back over the outputs to make sure that this was not only accurate but analytically on point.

This idea that sometimes AI looks good but smells bad—and with these outputs, sometimes we’d find that it’s like, “Oh, that kind of looks good,” but then when you dig into it, it’s like, “Wait a minute. This wasn’t quite right here and there.”

So just making sure that it not only looks good but smells good too at the end.

Yeah. And so I think these basic principles—we’re seeing them work quite well in a knowledge work context.
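Those steps can be read as a simple pipeline. Here is a hedged sketch, with call_model as a placeholder for whatever chat-model API a team uses:

```python
# Hedged sketch of the vibe-teaming loop as a pipeline. `call_model` is a
# stub, not a real API: swap in your own chat-model call. The shape of the
# four steps is what the conversation describes.
def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for your chat-model API call")

def vibe_team(transcript: str, revisions: list[str], human_review) -> str:
    # Step 1: rich human-to-human input (the 30-minute transcript, not a
    # sparse prompt) sets the bar for the model.
    draft = call_model(
        "Draft a strategy from this expert conversation:\n" + transcript
    )
    # Steps 2-3: quick first draft, then patient back-and-forth with the model.
    for note in revisions:   # e.g. "less sycophantic, more critical"
        draft = call_model(f"Revise the draft. Feedback: {note}\n\n{draft}")
    # Step 4: the team sense-checks the output so it not only looks good
    # but smells good too.
    return human_review(draft)
```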

And I guess for us now, we’re really interested in a double-barreled investigation with approaches like vibe teaming.

On the one hand, it’s really about the process and the how—like, how are we positioning these tools to support collaboration, creativity, flow in teamwork, and is that possible?

So it’s really a “how” question.

And then the other question for us is the “what.” So what are we pointing these approaches at?

For example, we’re wondering—if it’s ending extreme poverty, how could we use vibe teaming to actually…

And Scott Page uses this term—how can we use it to expand the physics of collective intelligence?

How can we run multiple vibe teaming sessions all at once to be much more inclusive of the types of people who participate in policy strategy formation?

So that when you think about ending extreme poverty, it’s ending it for whom? What do they want? What does it look like in local communities, for example?

That idea of expanding the physics of collective intelligence through AI and approaches like vibe teaming is very much on our minds at the moment, as we think about next steps and scale-up.

Ross: Obviously, the name of the podcast is Humans Plus AI, and I think what you’re describing there is very much the best of humans—and using AI as a complement to draw out the best of that.

Nice segue—you just sort of referred to “where next steps.”

You’ve described a lot of the wonderful things you’re doing—some fantastic approaches to very, very critically important issues.

So where to from here? What’s the potential? What are the things we need to be doing? What’s the next phase of what you think could be possible and what we should be doing?

Jacob: Yeah, I think I’m really excited about this idea of growing an alternate AI ecosystem that works for people and planet, rather than the other way around.

Part of the work at Brookings is really setting up that agenda—that research agenda—for what that ecosystem could look like.

We discussed it a little bit together at Human Tech Week.

I think of that in three parts.

There’s the technical foundation—so down to the algorithms and the architectures of AI models—and thinking about how to design and build those in a way that works for people.

That includes, for example, social intelligence built into the code.

Another example there: in a world of AI agents, are agents working for humans, or are they working for companies?

Sandy Pentland’s work on loyal agents, for example—which maybe we could link to afterward—I think is a great example of how to design agents that are fiduciaries for humans, and actors for humans first, and then others later.

Then, approaches like vibe teaming—ways of bringing communities together using AI as an amplifier.

And then I think the key piece, for me, is how to stitch the community of actors together around these efforts.

So the tech builders, the entrepreneurs, the investors, the policymakers—how to bring them together around a common format.

That’s where I’m thinking about a few ideas.

One way to try to get people excited about it might be this idea of not just talking about it in policy terms or going around to conferences.

But what would it look like to actually bring together a lab or some kind of frontier research and experimentation effort—where people could come together and build the shared assets, protocols, and infrastructures that we need to scale up great things like interspecies money, or vibe teaming, or other approaches?

Where, if we had collective intelligence as a kind of scientific backbone to these efforts, we could build an evidence base and let the evidence base inform new approaches—trying to get that flywheel going in a rigorous way.

Trying to be as inclusive as possible—working on everything from mental health and human flourishing through to population-level collective intelligence and everything in between.

Ross: So can you paint that vision just a little bit more precisely?

What would that look like, or what might it look like?

What’s one possible manifestation of it? What’s the—

Jacob: Yeah, I mean, it’s a good question.

So this idea of a frontier experimental lab—I think maybe I’m a little bit informed by my work at DARPA.

I worked on a DARPA program called ASIST—Artificial Social Intelligence for Successful Teams—and that really used this kind of team science approach, where you had 12 different scientific labs all coming together for a moonshot-type effort.

There was that kind of idea of, we don’t really know how to work together exactly, but we’re going to figure it out.

And in the process of shooting for the moon, we’re hopefully going to build all these shared assets and knowledge around how to do this type of work better.

So I guess, in my mind, it’s kind of like: could we create a moonshot for collective intelligence, where collective intelligence is really the engine—and the goal is, for example, to end extreme poverty, or to reach some scale of ecosystem conservation globally through interspecies money?

Or—pick your SDG issue.

Could we do a collective intelligence moonshot for that issue?

And in that process, what could we build together in terms of shared assets and infrastructure that would last beyond that one moonshot, and equip us with the ingredients we need to do other moonshots?

Ross: Yeah, well, again, going back to the feedback loops—of what you learn from the action in order to be able to inform and improve your actions beyond that.

Jacob: Exactly, yeah.

And I think the key ingredients here are really taking seriously what we’ve built now in terms of collective intelligence. It is a really powerful, transdisciplinary scientific infrastructure.

And I think it means taking that really seriously, drawing on the collective intelligence of that community to create evidence and theories that can inform applications, and then keeping that loop running.

I think what I discovered at Human Tech Week with you, Ross, is this idea that there’s a lot of entrepreneurial energy—and also capital as well.

I think a lot of investors really want to put their money where their mouths are on these issues.

So it feels like it’s not just kind of an academic project anymore. It’s really something that could go beyond that.

So that’s kind of the “time for collective intelligence” idea. We need to get these communities and constituencies working together and build a federation of folks who are all interested in a similar outcome.

Ross: Yeah, yeah. The potential is extraordinary.

And so, you know, there’s a lot going on—not all of it good—these days, but there’s a lot of potential for us to work together.

And again, there’s amplifying positive intent, which is part of what I was sharing at Human Tech Week.

I was saying, what is our intention? How can we amplify that positive intention? Which is obviously what you are doing, in spades.

So how can people find out more about your work and everything which you’ve been talking about?

Jacob: Well, most of my work is on my expert page on Brookings.

I’m here at the Center for Sustainable Development at Brookings, and I hope I’ll be putting out more ideas on these topics in the coming months.

I’ll be mainly on LinkedIn, sharing those around too.

Ross: Fantastic. Love what you’re doing.

Yeah—and yeah, it’s fun. It’s fantastic. So really, really glad you’re doing that.

Thank you for sharing, and hopefully there’s some inspiration in there for some of our listeners to follow similar paths.

Jacob: Thanks, Ross. I appreciate your time. This has been fun.