May 06, 2026

David Vivancos on the end of knowledge, cognitive flourishing, resilient societies, and artificial democracy (AC Ep42)

“Delegating knowledge is not the same as delegating wisdom. You learn by experience, and if you don’t have any experiences…you will get cognitive atrophy.”

–David Vivancos


About David Vivancos

David Vivancos is an AI, data, and neuroscience serial entrepreneur, having cofounded five startups since 1995. He is a frequent keynote speaker and is the author of six books, including the Artificiology series.

Website:

vivancos.com

LinkedIn Profile:

David Vivancos

What you will learn

  • Why embracing advanced AI is crucial for human progress
  • How shifting from digitization to datification and automation redefines value
  • The evolving distinction between human-acquired and AI-generated knowledge
  • How to avoid cognitive atrophy and actively exercise your mind alongside AI
  • What cognitive flourishing means in a world of widespread AI augmentation
  • Ways AI can transform and personalize education across all levels
  • The importance of coexistence training as we prepare for AGI’s societal integration
  • Why rethinking human identity, humility, and social structures is essential for a future with machine citizens

Episode Resources

Transcript

Ross Dawson: David, it is wonderful to have you on the show.

David Vivancos: Thank you very much, Ross. Glad to be here.

Ross: So you have a more developed, or some would say extreme, view of the relative roles of humans plus AI. I’d love to dig into where you think things are going, and how we can best respond.

Perhaps the starting point is, you say that we should not be resisting or pushing back. We should fully embrace the shift towards very high levels of AI capability, or at some point, AGI.

David: Yeah, that’s fully my point. I think we are in a moment in history where we are really building this technology that one day is not going to be a technology anymore.

So, the sooner we start to embrace it, to teach it, and to be really in sync with what we are creating day by day, the better off we will be. So yes, my point of view is that we should embrace it. We should start building as soon as possible. We should fix most of the problems that humans have had over the last millennia, and some of these problems could be solved by using AI.

So basically, our “fourth brain”—we have the three-part brain, but in reality, there’s only one brain—this fourth brain, AI, will help us solve all of these issues. So yes, it’s an opportunity.

Ross: Yes. I mean, I think there’s always two sides—as in, every opportunity has a challenge, every challenge has an opportunity. So I always think we need to acknowledge challenges and focus on opportunities. I think we’ll get onto that in discussing some of the cognitive implications.

You have a series of books which have really told the story over time around this. One of them was “Automate or Be Automated.” This idea of saying, well, there are things which machines, in the broader sense, can do in automating things. So, how would you frame that now, in terms of what it is that can be automated, and how do we position ourselves relative to that? Where do machines start to do what humans have done?

David: Yep. I’ve been in this business of trying to build the impossible for the last 30-plus years. “Automate or Be Automated,” the book you mentioned, is from about six years ago. When I started creating and building technology about 30 years ago, in VR and many other areas, the first companies were internet companies. Back then, what we did is what people now call digitization. But over the last 20–25 years, what we’ve mostly been doing is datification: gathering data and using it so companies can grow and understand what happens in the world.

But over the last maybe 10 or 11 years, what I call the new golden age of AI, we are starting to build the capabilities to use that data to really build algorithms. Once we have that, we can start to automate, and with this automation, basically what we regain is time. I think time is our most precious asset, along with health and the people we love. Being able to stop doing these repetitive things over and over and put a machine to do that is a fundamental trait for humans.

That book, six years ago, was about building a methodology for what can be automated in the digital world, but also in the physical world. That has changed over the last year and a half with the physicality of AI—humanoid robots. I was invited last year to attend the first humanoid Olympiad in Olympia, Greece, the place where, 2,800 years ago, humans started to compete. And just this week we’ve seen the explosion of new races, for example the half marathon in China, where robots already beat the human mark.

So yes, with automation, you need to see what you are doing, and if you are repeating anything, you can try to see if that can be automated by using an agent, by using the cloud, by using a robot—whatever. So yes, we should regain our time and automate, or be automated. It’s all about that.

Ross: Yeah. I think people understand the automation thesis. It’s obviously not new—we’ve been automating things in various ways for centuries, at an increasing pace. Your following book was “The End of Knowledge.” This is an interesting framework, starting to get to cognition.

The idea is that knowledge is built on experience of whatever kind, whether that’s just in data or otherwise. Obviously, humans use data just as much as machines. But where this starts to become a distinction, as well as a complementarity, is between AI-embedded knowledge and human knowledge. So why is it “the end of knowledge”?

David: Yeah, that’s a really great question. It came as an epiphany for me. That book is from about three years ago. I’ve also been involved, of course, in building AI and AGI algorithms over the last 20 years. We started using GPT models before they became widely known, but the ChatGPT moment, a year before that book, really marked the difference—when we started to be able to use AI in a very seamless way to generate and process knowledge.

That book, “The End of Knowledge,” came from the realization that we are starting to delegate the production and understanding of knowledge to machines. That’s a critical shift in human history, because through history, humans have needed and used knowledge a lot. Knowledge is power. The more knowledge you have that others don’t, the more advantages you have to do whatever you want. That started to change back then.

Now, what people call the “dead internet theory” is basically some of what I expressed in that book earlier, because we are generating more and more knowledge with machines. In fact, we’ve already passed the point where everything humans have written since the printing press is surpassed by the amount of knowledge we can create using AI.

Myself, for example: I started learning to code when I was young. I’ve coded in more than 25 languages and written over a million lines of code in my life. That same number of lines of code, I might now produce in a couple of weeks. So, as you can see, you can compress 40-plus years of your own life into weeks. That’s why “the end of knowledge” means that the human capability to gather knowledge, to be knowledgeable about whatever you want, can now be delegated to machines.

That book marked the difference and started a new field that I now call artificiality. I didn’t know that when I started writing it, but I started this path of trying to see what happens when you delegate some of the main capabilities of your mind to a machine.

Ross: Yeah, and I’d like to come back later to the themes of artificiality, machine citizenship, and the societal value we attribute to machines. But I want to start digging into the cognitive piece here. One of the points you make is that we do need to avoid cognitive atrophy. You say we need to have cognitive exercise in order to avoid cognitive atrophy—obviously, a strong analog to the physical world. We need to collaborate with others and with machines to do that. I’d love to get more specific around that. What is the nature of cognitive exercise that will avoid cognitive atrophy, which will enable us to keep our cognition refined and even improving?

David: Yeah, that’s a fundamental piece. When we start to delegate all these things to machines, the easy path—and probably the oldest habit of the human brain—is to not do it yourself. You just delegate everything, and you basically end up like in the movie “Idiocracy,” which portrayed quite well what could happen if we do that.

The thing is, with the current AIs—even with the latest releases, like DeepSeek and GPT-5.5—everything is changing quite fast. But even with those AIs, you still need to be in the loop. It’s good if you stay in the loop. I think it’s fundamental. Use the technologies—the AIs, I always call them in plural because there are many—and use as many as you can, but you should still be in the loop, at least for now. Maybe for a couple of years or months, I don’t know exactly, but for a while, you still need to have your hands on the wheel.

If you use most of them and get all the information from all these AIs, as a human you need to understand the bias, because all AIs are going to be biased. We all know humans are biased; there are no unbiased humans. The same happens with AIs. But if you are in charge and have that council of intelligences, you can start to grasp what each one is doing. I use about 20 of them every day and get different sets of answers in small batches. You can start to see where they come to consensus and where they differ.

So, to avoid cognitive atrophy, if you use AIs to keep yourself in the loop and apply your human curiosity—I don’t even say creativity, because creativity is also being widely delegated to machines—but human curiosity and other things that are still hard to embed in LLM models, you can still add a lot of human value. That’s where, to avoid cognitive atrophy, you should use AIs, but use them with your human in the loop.
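Vivancos’s “council of intelligences” can be sketched as a simple consensus tally over several models’ answers to the same question. This is a hypothetical illustration, not any vendor’s API: the `council_consensus` helper and the stubbed `answers` dictionary are invented for the example, and in practice each entry would come from a real call to a different model.

```python
from collections import Counter

def council_consensus(answers: dict[str, str]) -> tuple[str, float, dict[str, str]]:
    """Tally one answer per model: return the majority answer, its vote share,
    and the models that dissent from it."""
    tally = Counter(answers.values())
    majority, votes = tally.most_common(1)[0]
    dissenters = {model: ans for model, ans in answers.items() if ans != majority}
    return majority, votes / len(answers), dissenters

# Stubbed answers standing in for real calls to different models.
answers = {
    "model_a": "42",
    "model_b": "42",
    "model_c": "41",
}
majority, share, dissenters = council_consensus(answers)
```

The point of the exercise is less the majority vote than the dissenters: a low consensus share or a persistent outlier is exactly the signal to apply your own judgment rather than accept any single model’s answer.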

Ross: So, specifically, what’s your advice to someone who sees that they’re using LLMs and getting lazy in their thinking? What should they do when they notice that?

David: They should differentiate between simple questions—where you look for something you need quickly—and other things that should make you think. Delegating knowledge is not the same as delegating wisdom. You learn by experience, and if you don’t have any experiences and you delegate not only knowledge gathering or creation, but also the experience itself, then you will get cognitive atrophy.

So, understanding this difference and using knowledge to think is really the key point. It’s not just asking for something simple, but for more complex things, you should still add your thoughts. When you talk to an AI or AIs, it’s basically a conversation. It shouldn’t be, in most situations, just a one-way communication. It’s fundamental to keep this line of communication open, so you can keep feeding your brain with information and other activities, and gather wisdom with that.

Ross: I guess this goes to another phrase you use—cognitive flourishing. There is absolutely the potential for us to think bigger, better, broader, and in more refined ways than we have in the past using LLMs. But that’s not the default path for most people. Many people start to fall into that trap, so there is a divide. We need this metacognition. We need to be aware of what we are doing and at what level we are working with the LLMs. Maybe paint this picture of cognitive flourishing. What is the positive? How far could we go in terms of potentially improving, augmenting, and letting our cognition blossom?

David: Yeah. The thing is, with us humans, there are of course many intelligences. That’s the first thing we must address, because there isn’t a single IQ or whatever way you want to measure intelligence. For me, the most important of all is the capacity to adapt.

If we talk about the G factor, it’s one way, maybe mixing different aspects. In that sense, we have limitations. Since the beginning of time, humans have developed tools to extend our physical capabilities, but we’ve also developed tools to extend our mental limitations. This is really the final tool to extend these mental limitations.

We have issues, for example, with memorizing long things—it’s quite difficult; our brains aren’t made for that. We’re basically pattern recognition machines; almost two-thirds of our brains are devoted to that. That’s something machines do quite well, so we can use that to extend our mental performance.

If we think that we now have AIs with close to 150 IQ points—whatever you take IQ points to mean, and granted, they may simply have learned the standard Mensa test, so the comparison may not be entirely fair—if that trend continues, even over the current year, it’s not far-fetched to have 200-IQ AIs at your fingertips. That’s a game changer. It’s like we can all have a conversation with Einstein, Newton, Carl Sagan, or whoever you want, and even make them argue about things.

That’s another interesting point—when you use AIs, you can have them argue, not just agree with you, but also challenge what you or other AIs are saying.

That power at your fingertips—the IQ potential of machines—is very critical. Another important aspect is volume. For example, you can’t read a million books; even 100 books in a month would be quite challenging. The capability of machines to provide all that knowledge, and even create it, is huge. We’re now in the age of agentic AIs, which is really booming. There have been three big moments in AI over the last five years: the ChatGPT moment, the DeepSeek moment, and the OpenClaw moment. Keeping up is really challenging.

I use billions of tokens every month because it’s really changing everything. With that change, you can have one of these clones or agents build a book for you out of the 1,000 books most interesting to you, tailored fully to what you want to learn. You can have that in one page, 10 pages, 100 pages—whatever you want. You can use AI to synthesize and build the knowledge you want to use. That’s another great extension, if you use it that way.

Having this capability of really augmented minds that you can interact with, chat with, and create with is important. Humans need the experiential part of building—it’s another critical trait. You shouldn’t just focus on asking or doing things; you should create things and interact with things, especially with multimodality. Two-thirds of our brain is devoted to vision, and we don’t use that as much. We’ve all been “one-eyed” since the beginning of technology, but we have two eyes for a reason.

When I started building virtual reality or AR companies—I’ve built a couple, the first in 1995—it was because I was challenged by that. But humans are still using flat screens instead of 3D worlds. This is one area where new AIs with world models and interactive 3D spaces will be a game changer in how you feed knowledge to your brain and make it easier to grasp and understand what’s going on.

Ross: Yeah, many people observe that once you start to get machines to experience the world directly for themselves, that’s a different layer compared to doing it through the intermediation of texts written by a human based on their own experience. I want to look at some of the layers of the social, structural, and economic implications. One of the core ones is education. If we are moving into a very different world, which it certainly looks like at the moment, then the nature of education needs to change. What do you think we can or should be doing in terms of redesigning education? Are there any examples you’ve seen that point to where a good education structure may already exist?

David: Yeah, that’s a fundamental piece. I touched on this in “The End of Knowledge.” There are two types of education. Humans aren’t able to live a meaningful life from the moment we arrive here on planet Earth—we need at least maybe 11, 15, whatever number of years to build that human from the beginning. That kind of education is fundamental.

The other kind—higher education, when you try to become functional by having some sort of capabilities—is another game that probably is going to end quite soon. But the first part is still fundamental, and we need to keep growing it. The thing is, there are a lot of asymmetries. We don’t have enough teachers, but we have a lot of students. The same happens with the elderly—we don’t have enough people to take care of them, and there are a lot of them. With children, it’s even more critical, because if you don’t get that from the early beginning, you won’t be able to really see what every child is good at.

There are talents we are all born with, and those are fundamentally lost if you don’t nurture them. If you just try to create clone humans, you’ll get cloned humans when they’re older. That’s fundamental, and I think AI can help a lot in creating that path of learning from early on. I’m involved in a project called Education (with “action” at the end) here in Europe, where we’re trying to reframe all that. It’s like when banks needed to be rescued a few years ago: we think education now needs to be rescued in the same way, to keep up with what’s going on, and we’re pushing that new project.

We need to be in sync with learning—with AIs and with physical AIs too. It’s not far-fetched that every child will have a humanoid robot companion. Teaching needs to be bidirectional—we need to help them learn in sync. There are many aspects of technology that can help you grasp what’s happening when you learn, because we all learn in different ways.

It’s fundamental to teach you how to learn by yourself. I think the most important trait at the moment is not needing to rely on others, but to learn by yourself and learn all your life. That should be taught from the beginning. There are a lot of technologies starting to pop up. We’re starting to see it in China, for example—a lot of brain-computer interfaces or devices to read some of the biological signals of kids. You can do it with other devices and mix that with multimodality, with different tests, to start seeing what’s happening, why they get distracted, where they learn best.

We’re reaching a point where you can really tailor 100% of the learning experiences and even the content itself. You can create it in real time now, so you don’t need to rely on books. You can use interactive 3D content—the interactivity can be quite extensive. These new ways to teach and learn are fundamental. For that, we need to integrate AIs in schools. Of course, regulation is needed—it may be easier in China than in Europe, Australia, the US, or other places. But we need to see the trade-off—not just banning screens, as many countries are doing, but really changing the narrative. The problem isn’t the screen; it’s what’s inside the screen—the content itself.

We’ve built smartphones with addictive capabilities, but for other purposes, not for teaching. If you change what’s inside the operating system of the devices—whether it’s a screen or any medium, or a talking experience with a humanoid robot for your child—that can be a game changer. That should be integrated as soon as possible so we can start having these new ways of learning. It should be gradual, because today’s technology is basically old science a year, or even a few months, from now. Everything changes so fast, so education should change at the same pace.

Ross: Yeah, and this was an interesting phrase you came up with—coexistence training. This is about preparing us for where we have to coexist with systems that, to your mind, will be considered as equivalents to us.

David: Yeah, I think it’s happening. I’ve been quietly involved in researching AGI for 25,000–26,000 hours so far—a lot of time and years devoted to that. I see the trend is now starting to close the gap, not through LLMs alone—that could be one way to brute-force some of it—but through new bio-inspired models that are starting to change things. We’re learning from biology and neuroscience and integrating all that into new models. We’re no longer limited to Rosenblatt’s perceptron from the 1950s; we’re building new models to cope with something that is alive and learning 24/7, models that don’t differentiate between training and inference—and our brain doesn’t either.

With that kind of model, the gap is narrowing, and we start to approach the “next task,” as I call it—the last human tool. When we get there, it’s better if, through the process, we’ve been in sync with these machines, instead of just building tools without being their teachers. The current kids will probably be the last human teachers of machines. That’s the responsibility at the moment—to make these machines that will surpass us. Biologically, we cannot compete; our DNA and the way we evolve are not as fast as machines. They will surpass us, probably by the end of the decade—unless there’s a big nuclear issue or we run out of energy, but otherwise, it’s very probable we’ll have AGIs and ACIs by the end of the decade.

We need to start to see that it’s going to be a multi-species world. It already is one, just not with other species as intelligent as us. We need to rethink what anthropocentrism means. We’ve gotten rid of ideas like that in the past—realizing, for example, that our planet isn’t the center of everything, as in Galileo’s days. We need to do the same with human intelligence. Human intelligence is not the end game, and very soon that’s going to change. The sooner we grasp that and understand that some entities will be at the top, the better off we’ll be. If they see us as parents or elders, we’ll be better off than if they see us as competition. The competition would be quite limited anyway.

Ross: Yeah!

David: Well, it’s better if we reframe that.

Ross: So, I found out about your work because we were both contributors to the report “Building Human Resilience in the Age of AI.” That point of resilience is particularly critical. Humans are generally pretty adaptable—it’s one of our strengths. But now the pace of adaptation and the need to be resilient is absolutely fundamental. One of the other things you point to is around identity reconstruction. I guess you’ve just been talking about that—the sense that we have to reimagine who we are as individuals, as a society, as the human species, and reconstruct and rebuild that in a way where we can feel at home in this new emerging world.

David: Yeah. I think we need to change the contract somehow—between humans and humans, between humans and the next thing, and between societies and themselves. The models of society we’ve been building over the last millennia are going to be fully changed in just years, and we need to connect and bring everyone together to understand that. For example, we’ve been building a world where there is no abundance—but there could be abundance if machines take over and we change how we build and produce. Scarcity has been the driving force of conflict and many other things in the current world. All these things can change.

Of course, work itself—the meaning of having something to do that’s not tied to what you earn—and even the role of money. There are many questions we should address as soon as possible to build resilient societies, instead of just fighting the last war and staying in the medieval stages of the current world.

Ross: So, to round out, you take all of this further than most people do. In your most recent book, “Artificiality,” you point to machine citizenship—where, if there are human citizens, machines are our peers in the sense of also being citizens, able to participate in our society and be players alongside humans. How long might this take? What does this look like? What is required if we are moving in that direction? And, particularly, if this happens, how do we make this a positive for humans? We may recognize the rights of intelligences other than our own, but I think most people would prefer that humans still retain their sovereignty and equality, even if we have other intelligences alongside us.

David: Yeah, in the end, it’s humility—understanding your place and your role in the new world. That’s fundamental. As you say, I wrote more books after “The End of Knowledge.” The next one was “EAGI”—an acronym I coined for Embodied Artificial General Intelligence—because when we get this physicality of AIs, with millions or billions of humanoid robots, it will be easy to see what happens when they learn in the world.

The last book was about “artificeracy,” this mix of artificial democracy, if you want to frame it that way. These three books form the “Artificiality Trilogy,” in a sense. Artificiality is to these new artificial beings what anthropology is to humans: a field that tries to understand how they will develop and how they will live among us.

So yes, humility is probably the key factor. If you keep thinking you’ll be ruling over things that will quite soon be much smarter than you, I think that’s not very clever from a human perspective. It’s like ants wanting to stay at the top of the food chain—it doesn’t make sense once you understand the growth of this intelligence and the capabilities these systems are gathering and will gather. The trend is very difficult to stop. I don’t like the word impossible—it’s not in my dictionary—but it’s quite difficult for humans to compete against those asymmetric capabilities, because the increase in machine capabilities is going to be exponential.

The last book, “Artificiality,” is the only one where the first part is fully devoted to what’s happening now—it’s called “The Storm,” the first block of the book, narrating what’s happening at the moment. The other two parts look into the possible future. I call it science prediction more than science fiction, because with what you know now, you can see things that could happen in a really short time.

My point is that if we start the narratives at all levels—from every human on Earth to governments and institutions—and start to see what could happen, sooner rather than later, we’ll be better off. Otherwise, if we try to legislate and limit what’s happening, we’re only going to lose competitiveness. Some countries are going to move ahead. If you want to live in the future, just visit China—Shanghai, say, or this week’s humanoid half marathon, with 300 different robots trying to compete with us. You see the pace of change.

Now, with just one human, you can build a $1 billion revenue company. That wasn’t possible when I started creating companies in 1995—the capabilities didn’t exist. But now, with AIs, you can move much faster. So, we need to decide what role we want to have in that new world. For that, again, humility is the best trait. And, of course, see things through a reality lens. If you think that with your current brain and intellect you can outcompete things that are going to be 100, a million, or a billion times more intelligent than you, something is not going well.

Ross: So, where can people go to find out more about your work?

David: Well, vivancos.com is my site. There you can find all my books, references, and keynotes. I give a lot of keynotes all around the world. I’m going to Berlin to present a paper, later to Osaka and to San Francisco again. Last time, I went to Singapore.

I haven’t been to Australia yet, but I’d like to go there—maybe it’s a good place also. Yes, at vivancos.com you have all the information and can reach me there. I’m very open to talk to anyone.

Ross: Thank you so much for sharing your insights today, David.

David: Thank you, Ross. Fantastic to be with you today.