April 22, 2026

Michael Gebert on designing freedom, human self-determination, cognitive sovereignty, and systems of agency (AC Ep40)

“Freedom no longer exists outside the systems, and it depends on the design. Coming back to the design, it’s about understanding that we need to distinguish between intelligent systems and agency.”

–Dr Michael Gebert

About Dr Michael Gebert

Dr Michael Gebert is Chairman of the European Blockchain Association and co-founder of AI Expert Forum. He works at the intersection of artificial intelligence, digital sovereignty, and institutional responsibility. His book 2079 – Designing Freedom is just out.

Website:

2079.life

LinkedIn Profile:

Dr Michael Gebert

What you will learn

  • How the concept of freedom extends beyond politics and economics to personal agency in an AI-driven world
  • Why cognitive sovereignty is essential for maintaining individual responsibility and accountability as intelligent systems become more pervasive
  • The shift from making decisions ourselves to designing the frameworks and conditions for decision-making with AI involvement
  • How to distinguish optimization from true human empowerment when integrating AI tools into personal and organizational life
  • Practical routines and metacognitive strategies for individuals to retain agency when collaborating with large language models and intelligent systems
  • Why organizational leaders must prioritize cognitive sovereignty and human potential early in AI deployment, not just technical efficiency
  • Insights into the challenges and importance of embedding frameworks for freedom and cognitive sovereignty within corporate, governmental, and policy structures
  • The critical need for ambassadors of freedom within institutions to promote reflection, ongoing discussion, and the integration of responsible AI practices across all levels

Transcript

Ross Dawson: Michael. It is awesome to have you on the show.

Michael Gebert: Hey, great to be on the show. Thanks for having me.

Ross Dawson: So we first connected probably around 15 years ago, when we were both involved in crowds, in creating value from many people. One of the interesting points now is that we still live in a world of many people trying to create collective value, and AI is laid over that.

So it’s interesting to see that journey from where we’ve come to where we are today.

Michael Gebert: Absolutely, and I vividly remember when we first had contact about this very exciting topic of crowdsourcing and empowerment of the crowd: really making people believe not only in themselves but in communities. And therefore not only strength in terms of crowdfunding, crowd investing, and their financial gains, but also empowerment in what they do. This is very fundamental, I would say even a right for humanity, to reflect on and act on.

I think the methodology and technology back then helped a lot. And to be honest, I’m still partly involved in some of those efforts. Even the big crowdfunding platforms, also here in Europe and in Germany, are vital and really active. Of course, not in that dramatic media shift hype that we experienced, but they’re still there, and it proves that it’s a concept that should stay.

Ross Dawson: Yep, absolutely. You know, there’s obviously collective intelligence, amongst other facets. But this goes to, I think, the frame of your new book, 2079, Designing Freedom. So freedom is an interesting word, and something which I hope we all aspire to.

Michael Gebert: Yeah, you know, freedom, of course, is one of those very multifaceted words, right? It could be translated in a political context. It could be translated in an economic concept, meaning monetary-wise. It could be translated—and this is my translation—in a very personal, one-to-one reflection about how do I as a human being see myself in that surrounding, bombarded not only by information but by intelligent systems, basically AI as we describe them, and all that is behind those systems.

Ross Dawson: So there’s a few things I want to dig into here. And I guess there’s another word there: designing. Obviously, at a societal infrastructure layer, we want to be able to design the systems whereby we can all individually have that freedom of choice in how we live our lives.

Michael Gebert: Yeah, although, looking at the world geopolitically, there is of course sometimes no choice. And if you are able to generate those choices, first of all by understanding how to design them, that’s a very good first step. So when I wrote the book, its precursor was basically a small research paper I did, also published on ResearchGate. This is the foundation where I started thinking and reflecting. The core there is a question that I think is becoming unavoidable, now and for the future.

The question is: if more and more cognition or judgment and action are delegated to intelligent systems, what has to be true for human beings in order to remain genuinely free? So the book is really about freedom, agency, responsibility, and at the end, about belonging in a world of increasingly disruptive intelligence.

Ross Dawson: Yeah, yeah. So the word agency is obviously very much of the moment, in lots of ways. But I think human agency is absolutely critical. One of the central things you lay out in the paper, which, as you were saying a moment ago, is on everyone’s minds, is this idea that agency used to be about making decisions, whereas now, as you describe it, agency is shifting to authoring the conditions for decision making. So we’re not necessarily making the decisions ourselves, but we do control and guide the conditions, the context, or the structures for decisions, so that we retain responsibility and accountability, and those decisions are the ones we would want. So how do we do that?

Michael Gebert: Yeah, you know, the question before asking how is really to understand under what conditions human beings remain authors of their lives when more and more of those decisions are shaped by, as you say, agency systems, or whatever name they go by, whether fancy, new, or already existent. So the how—and it’s not about lifting a secret—is about going back to cognition, to the cognitive intelligence and cognitive roots that are in us, but of which, over the years—and you reflected on the last 15 years, especially for the generation after 2008, meaning after the iPhone—we have lost large parts. That ability is very human.

So it’s not really a reshaping or something new. It’s also not a how-to book of advice; it is really a finger going up and saying: people, please remember that the deeper question is under what conditions human beings remain genuinely free when more and more cognition, judgment, and action are to be owned back rather than delegated to the systems. This is, of course, very formal in the need and in the demand, but especially, as you mentioned, when laying it out into organizations or government structures, it is hardcore policy and hardcore principle. You can write a lot of things in your generative AI policies, but what I see right now is that in reality, first of all, nobody’s really reading them in depth. Secondly, there is really no reflection point on this cognition, judgment, and delegation. Therefore, this comes prior to any interest in how-to in terms of technology and what LLM to choose. This is really prior—it’s day zero—when you think about what’s going on, and about how to position yourself, your company, and your team. Only then comes the next step of thinking.

Ross Dawson: So I want to come back to that, but I think one of the phrases you use is cognitive sovereignty, and this is in a context where one of the most shared papers recently is around cognitive surrender. Cognitive sovereignty is the opposite of cognitive surrender. But the reality is that in interacting with LLMs, it does change our cognition.

Michael Gebert: As long as we, yeah, as long as we delegate cognition, basically. The auto effect is—

Ross Dawson: A conversation with a human changes our cognition too, and I think we need to recognize that. So it’s not just conversing with LLMs. Conversing with a human changes the way we think, which is a good thing, because we’re getting more diverse opinions. But obviously, LLMs are not humans, and while that interaction could possibly enhance our thinking, if we get some great ideas and different perspectives from an LLM, then we’re still retaining cognitive sovereignty. So let’s frame this: how do we as individuals get to cognitive sovereignty? What does that look like?

Michael Gebert: Yeah. So first of all, I think we need to understand that when we delegate cognition to an AI, we redesign responsibility. This is indisputable and non-negotiable. This is a fact. When you compare it to a human interaction, there is no default redesign of responsibility necessary. It’s a reflection point, it’s a discussion. If it’s a good conversation, it’s uplifting for both ends. You come out of that conversation with, yeah, uplifted cognition.

Surrendering cognition, as you said, is a very factual statement that brings a lot of views, but it’s basically raising the white flag and saying, I surrender. What I say is, no, it’s not time to surrender. It’s time to appreciate, and it is time to understand that freedom no longer exists outside the systems, and it depends on the design. Coming back to the design, it’s about understanding that we need to distinguish between intelligent systems and agency. We need to separate the capacity for governance. Therefore, we should distinguish between formal freedom and substantive freedom. The difference there is that there are two parts: assistance and substitution. Understanding that there is a very important difference, and really feeling that difference personally with input, makes it powerful. When we think about AI and all those systems, we often confuse optimization with empowerment, and this is one of those very dangerous paths.

Even, you know, you are very active on LinkedIn, I’m a little bit active on LinkedIn, and we see all those posts. To be honest, I would say that since the start of ChatGPT and all the other LLM models, 80–90% of those posts and comments are now AI-driven, and you can see it, you can read it, once you’ve been on those platforms long enough. Therefore, people think they feel empowered, but it is not empowerment. It is maybe optimization, but it’s not a reflection point. Coming back to your core question of cognitive sovereignty: cognitive sovereignty would really be going back and abstracting and saying, all right, AI can absolutely expand human possibility, but it should be about human potential, not about completely outsourcing to and empowering the systems.

Ross Dawson: So, so what? Let’s just—what does an individual do when they’re working with an LLM? What are the practices that enable them to retain cognitive sovereignty?

Michael Gebert: Yeah, I think, first of all—and this is, of course, a lot of work—every output of any system is a suggestion. Treat it as a suggestion. Compare it to a conversation: if you have a conversation with a very wise person, very reflective, very well known, normally you don’t instantly believe what’s coming out of their mouth. It depends, of course, on your dependency on that person, but normally, you reflect.

What we see right now is a dramatic shift towards instant acceptance and instant recognition of AI output. Even though I’m not a skeptic about augmentation, I’m skeptical about unexamined delegation. That means human flourishing is possible everywhere, but it does not emerge automatically from capacity. This is the reflection point, and it is, as I said, not easy. It’s a routine. It’s basically a self-delegated routine, saying: all right, this is the output, that’s interesting. Maybe it’s misleading. Maybe it is another opinion. Maybe it really substitutes my argumentation. It feels like empowerment, but at most it’s optimization.

Ross Dawson: So, you know, obviously this requires that metacognition, as in, to be aware of your own thinking processes, individually and with the machines and with others, and at which point you can start to observe and reflect.

Michael Gebert: It’s, you know, Ross, to be honest, it’s hard work. Because in daily life, for a regular person at work, there’s time pressure, social pressure, work pressure—there’s a lot of pressure. The core motivation for most companies is efficiency: to integrate AI and AI systems to be faster, easier, leaner, to make more profit. So the human factor is not at the center. We learned that also from crowdsourcing and crowd intelligence. My PhD on crowdsourcing integration in companies, many years ago, reached the same reflection: once those pressure points are triggered, the reflection that is needed, as we talked about, goes down massively.

So what follows now, historically and consequently, is that AI should not be a technological footnote. It should really be a core issue: to integrate that cognitive sovereignty, and out of that, the designing process—what I now call freedom—becomes ongoing, because at some point it shifts into an automatic mode. And there really is a lot at stake here in Western civilization, and in our civilization generally. So it’s not about tools. The point is when a tool becomes an environment. This is really what I think a lot about, and it is mind-blowing on the one hand and, on the other hand, really frightening to see, as you say, the opposite happening too.

Ross Dawson: Yep, yep. So we’ll come back to that. We’ve still been talking about, in many ways, these decision structures. So, I guess, in an organization, let’s say a head of transformation or CEO says, “Okay, we need to move to what I call humans plus AI decisions,” where humans are involved and AI is involved, and we get to decisions that may be better, faster, cheaper, but also still retain governance, meeting your ethical and compliance requirements, and that the humans are accountable. Of course, there are many types of decisions, and so that will play out in different ways across different types of decisions. But what is the process for just thinking through and implementing those decision structures or conditions whereby you can have better decisions while still maintaining that control or freedom, as well as accountability?

Michael Gebert: Yeah, first of all, I think the real leadership challenge is not just to deploy, right? It’s about preserving agency while doing so. This is the critical factor. I don’t know if you can recall a precedent in history, but to my understanding, it’s the first time that we have this hyper-integration of AI usage in both private and commercial business environments. There is no real separation: the same person, the human, is using AI systems privately—shopping lists, optimization, planning, automation, personal agents—and using them in the company. Therefore, two things should happen structurally.

First of all, the reflection on how to integrate cognitive sovereignty has to be ramped up, learned, taught, and really developed within the organization. Ideally beforehand, but to be realistic, while deploying AI with that knowledge. So what is it? It is a training program. I know that you are a fan, and you have superb pictorials and structural views that you post on LinkedIn, and this would be a perfect example of producing such a roadmap: basically saying, “All right, these are the basic steps. You may not be able to follow them 100%, but just to give you a core idea of step 1, 2, 3,” and then following the roadmap, a framework. But now with the difference that, because it is so integrated, the person who understands the framework can also apply it in their private life, meaning with their children, godchildren, partners. This is why it’s so interesting, because it’s core learning.

Right? So basically—and I know you have a couple of those already in existence—it’s kind of the next step. What should come out, or be produced, is a combination, saying, “Okay, this is the addition to that framework, in combination with that framework, understanding what I and others try to explain here.”

Ross Dawson: Fantastic. I interrupted you, and you were at the point of saying, okay, this training or these frameworks are assisting people to have agency in this process. Let’s come back to that. You’re helping people to frame or to have agency themselves, but this is part of a process where you are starting to bring AI into decisions. So where does that take us?

Michael Gebert: It takes us to a very fragile and really hard-to-judge state, which is where we are at the moment. I can really only reflect on my current experience with training and with conversations within organizations—not just because the book is a foundation, but because I’ve been doing this for the last 30 years. With that reflection point, I would say it has never been easy to have a disruptive framework implemented in a running ship. The company is moving. There are goals. There are different goals. There may be goals that are the total opposite of what the framework says. But realism kicks in very easily. My first door opener is to say: if you as a company want, in a possible future, to integrate human potential into your upcoming company framework, then we have to talk and put a framework of cognitive sovereignty and understanding of systems of agency into your existing and upcoming mediated, intelligent systems. Otherwise, if that is not understood, we will have a dependency on decisions, which is not only bad for your employees but, in the medium term, maybe even in the short term, depending on where you integrate the AI systems, can be very destructive for the whole company.

This understanding is a massive shift from a regular decision, which mostly still comes out of the technical department: the CTO or the CIO are fascinated by the possibilities, they report to the board, the board sees efficiency, and out of that a testing period and pilots are developed, and then the rollouts begin. Which is all fine in the old thinking, because it doesn’t price in what’s happening on the cognitive and human-potential side. So it’s an additional card that has to be played very early on.

Ross Dawson: So are there any organizations that you have seen who are doing any of this well, or even just a little bit well, in terms of even just taking this framing into how they’re trying to approach it?

Michael Gebert: You know, in general, I would say there are a couple. I have one example from a globally active company that is doing a very good job at the department level. Overall, though, the whole company is fragmented, and therefore decision making is fragmented, so I cannot really judge how they are doing as a whole, as a company.

Ross Dawson: Just on the department. If they were doing it well, what were they doing?

Michael Gebert: In that specific company, they understood—and maybe that is the interesting part—relatively early, because their product has a very human side: pharmaceuticals. Whatever you take in, a pharmaceutical elevates or alters your human condition, and therefore they had a sensitivity for the topic very early on. That made it much easier to attract attention, and understanding within the leadership and decision making, to integrate that thinking in the development and R&D departments working on future aids and medicines. Which I think is perfect and fascinating, and it fits, but the foundation was a preset of basic understanding bound to the product, or bound to the industry itself. The other one was automotive. You know, I’m in Munich, so here, and in Germany generally, there are still a couple of automotive companies left, and they understand that there is a big shift towards robotics and FSD, and another shift towards human-centric driving. But there is still a human person in the car; somebody has to be transported from A to B. Their department for AI and future development also understands this cognitive sovereignty very well, because their approach comes from a very human angle. What I want to say is: it helps a lot once that framework is integrated into an existing acceptance of the importance of the topic. What I found is that in the financial sector especially, it is, at the moment, not really recognized. It’s very product-focused, very output-focused, very efficiency-focused. It’s not really focused on preserving human intelligence, reflection, and agency, and therefore on designing their cognitive sovereignty—aka freedom. I think that sector will fall behind massively, but we will see. This is just a reflection point now in Europe, or especially in Western Europe, like Germany.
But the similarities appear to be there on a global scale, because the systems tend to be very similar that are being used.

Ross Dawson: So that just takes us to rounding out with the big picture. Your book is for, among others, policymakers, and we’ve talked about the individual and organizational levels. So now, pulling it up to the macro level, for those who are creating the policies for governments and supranational organizations and so on: what are a few core lessons or insights for how we design policy to enable human freedom, agency, and dignity?

Michael Gebert: Yeah, maybe I’ll give you some really concrete examples, because I presented the book this year in Davos at the World Economic Forum. I had a reading session there. Of course, it’s kind of a competition between giants, so I was humbled to have a couple of people there, but not as many as I wished, to be honest. Still, I was there talking to a couple of those macro-level, high-end policymakers, and what they said is very similar to what I heard back in my crowdsourcing research: they have the data, they know the importance, they sometimes even have a hint of a framework to do it. However, inside the rollout pattern and inside the organizations themselves, there are a lot of—not risks, but—hindering mechanisms that tend to prevent an instant understanding. What they sometimes do—and this was a gentleman, interestingly enough, from a country in Africa—he said, “We need to have, like in the old days, ambassadors of freedom within the organization at all levels.” Basically, they are the spearheads, they’re the flag keepers and the wisdom keepers, in a very front-end way, understanding the core concept and elevating the rest of the crowd, of the team, to a level where they are open to discuss, understand, and integrate. This, I think, was one of the most hands-on approaches I’ve heard, because all the others about training and retraining and certification—it’s all good, but it doesn’t really guarantee integration.

Ross Dawson: Yeah, yeah. So, Michael, where can people go to find more about your work and your book?

Michael Gebert: So, basically, if you have a ResearchGate account, the free prelude—the research there—can be downloaded for free as a PDF. I would be happy to extend or expand it; if there are researchers or organizations out there who want to use it as a foundation or adapt it to their special needs, I’m more than happy to assist. The book itself is at 2079.life, a dedicated website for it, and you can buy it, of course, online or from any retailer you want. Interestingly, with this book I deliberately made it a hardcover version—not that I’m old school, but I think there is something about seeing it physically and marking it up. When I did the promotion, I gave it to a couple of people who normally don’t read much because they have audiobooks or PDFs and a lot of work but no time. But with this book, they came back to me with photos showing how they had underlined things, marked it up, put in their reflection points. I think this is what this book is about, because it’s not a 300-plus-page book. It’s quite condensed, but it should bring you, in basically every paragraph, to rethink your approach to the topic. When that is reached, the book is 100% where I want it to be. It’s definitely not a how-to book—how to be great, or “in 30 minutes you’re an AI prompt magician,” or anything like that. It’s quite the opposite. It goes way deeper. A lot of books touch on the topic at some point, but not in that condensed form. As you may have read, there’s no version 4.0. When I started thinking about it, it was COVID times, and the first version I gave to you has nothing to do with the current version. The first version was a blue pill, red pill approach: there would be a dystopian version and there would be a freedom version. Over the years, now in the fourth year after COVID, with all that has happened on the technology, geopolitical, and human side, this is the output now, a development.
So the book itself is not a still space; it is a development space.

Ross Dawson: Fantastic. Well, thank you so much for your time and your insights on the call today and the very important work, because obviously freedom is something which we need to work on. Thank you, Michael.

Michael Gebert: I think that’s the core. Thank you so much, Ross. And have a great day. Thanks for having me.