“Therefore, if you are in strategy today, you have to enrich your shareholders, but also be careful not to harm the planet, not to harm society, and to express your concern for what is called stakeholders.”
–Dominique Turcq
About Dominique Turcq
Dominique Turcq is founder of the Paris-based research and advisory center Boostzone Institute. His roles have included professor at a number of business schools, including INSEAD; head of strategy for major organizations, including Manpower; partner at McKinsey & Co; special economic advisor to the French government; and board member of the Société Française de Prospective. He is the author of eight books on strategy and the impact of technology.
What you will learn
- How the role of strategy in organizations has shifted from focusing solely on shareholders to considering broader societal and environmental stakeholders
- Why long-term foresight and scenario planning are increasingly critical for effective strategic decisions
- How new legal and societal expectations are reshaping the responsibilities of executives and boards
- The evolving relationship between boards and executive teams as AI advancements introduce new governance challenges and opportunities
- Practical ways generative AI is changing decision-making, communications, and risk management at the board level
- The potential for AI to transform work, skills development, and organizational structures—and the risks of cognitive atrophy from overreliance
- The importance of fostering an “ecology of mind” in organizations to balance technology use, creativity, learning, and collective cognition
- Why ongoing reflection, adaptability, and diverse mental engagement are essential for individuals and leaders amid rapid AI-driven change
Transcript
Ross Dawson: Dominique, it’s wonderful to have you on the show.
Dominique Turcq: Thank you, Ross. It’s very nice to be invited by you on such a prestigious podcast.
Ross: So you have been working in strategy for a very, very long time, and along that journey, you have recognized the impact of AI before many other people, I suppose. I’d like to start off with that big frame around strategy and how it’s evolving.
Maybe we can come back to the AI piece, but how have you seen the world of strategy evolving over the last decades?
Dominique: Several things have happened in the last two or three decades. First, an anecdote. I was the head of the French Strategic Association, and we closed this association in 2008. You know why? Because we had no members anymore.
In other words, fewer and fewer companies had a Chief Strategy Officer. Why? Because people in the executive team or on the board thought they were all good at strategy and didn’t need a strategy officer. The problem is, when you are operational, whichever part of the executive team you are in, you don’t have the mind or the time to look at the long term, and therefore to really look at the strategy.
You may be competent at strategy execution, but are you good at strategic planning, at forecasting, at long-term planning and futurology? You’re not, because you don’t have time to do that. So we closed this association, and frankly, it’s very interesting to see that it has not been reborn. We still have very few real Chief Strategy Officers in French companies.
And I’m sure it’s the same all over Europe. I don’t know about the US, but in Europe, we see it everywhere. So to me, that’s a big change.
Another big change is that we have clearly entered, for the last 10 years and for the next 20, a major era of change—a change in paradigm. Until 10 or 20 years ago, let’s say until 2000, the basic paradigm was Ricardo’s paradigm from the 19th century. In other words: the Earth has all the resources we need, the Earth can handle all our waste, and all of this is free.
Remember, Ricardo said the Earth’s resources are free, and we have no limit. Until 2000, that was the thinking. From 2000 until today, more or less, people have started to realize that, well, some resources are infinite or look infinite, but most are finite, and the Earth’s capacity to absorb our waste is not as good as we thought.
Now we are entering a new paradigm, which will become very clear in the next few years and is very important for strategy. We are entering a finite world. Companies have a sociological role to play, both for the Earth and for society. This is very new. In France, we have a law called the “Loi PACTE”, which changed the legal code of corporations.
Before that, it said a corporation is here to enrich the shareholders, more or less. Now it says, yes, we have to enrich the shareholders, but we also have to take into consideration the impact the corporation has on society and on the environment. It’s a huge legal change.
Therefore, if you are in strategy today, you have to enrich your shareholders, but also be careful not to harm the planet, not to harm society, and to express your concern for what is called stakeholders. This is an interesting part in strategy, because until recently, stakeholders were more or less your employees, your suppliers, your customers. Now, obviously, you also have the environment and society, and even the local place you work in.
If you are in a city where you are the most important employer, you have a relationship with that city, and you are responsible for its health. So it’s a stakeholder. We have a lot of new stakeholders, and from a strategy point of view, this has big implications. How do we handle all these stakeholders at the same time, and to which stakeholders should we listen? Because today, most stakeholders are not at the general shareholders’ meeting. They are not even on the board. So how do we listen to them? How do we respect them? How do we manage our long-term relationship with them? So yes, strategy is changing a lot.
Ross: One of the things you’ve always said over the years is that in order to build effective strategy, you have to have a long-term view. You have to use effective foresight, or, in French, la prospective. And that is a fundamental capability in order to be effective at strategy.
Dominique: Yeah, I have always defended that, because I think you can only work on strategy if you have a real long-term view. There are several issues with a long-term view, but one is complexity, because we don’t have a crystal ball. We have to understand what will really happen, and therefore what the consequences are. We have to make hypotheses about discrete variables. Continuous variables are okay, in a way.
With discrete variables, you can’t; you have to build scenarios. How will the war in Ukraine unfold? You have to make a scenario; you cannot have a definite idea. That is a discrete variable.
Continuous variables are almost more interesting, because we know which variables we have and where they are going. Population increase—we know where it goes. Climate change—we know where it goes and some of the implications: we are going to have less water, and maybe resource issues with rare earths. The great thing in strategy today is to work on these long-term continuous variables and see how they impact today’s strategy. There are many of them, by the way, but I see a lot of Chief Strategy Officers, where they exist, not taking even these into consideration.
I’ll give you two or three examples. When you speak about the size and distribution of the labor market, very few people realize that we have more and more older people in it. How do we deal with this aging? It’s a real strategic issue, because it means the whole organization will change. That’s a very classic example.
Another one: I had a very interesting meeting recently with people in the agricultural field—cooperatives, which are big companies. I asked them: do you realize that, because of climate change, the temperature inside your warehouses might go up to 50 degrees Celsius? Even if it is 48 outside, it may be 50 inside. What happens at 50 degrees? Chemical products stored together explode. Therefore, you have to plan how you are going to build your warehouses, how you are going to change your warehouses.
This is a long-term step. It’s not two years; it’s within the next 10 or 20 years, and we hadn’t realized that. So while this is a continuous variable—we know for sure temperatures will increase, and we know quite closely what will happen—we have to plan for it. This, population, and a few others we can plan for, and few people do it today. That’s why, Ross, you’re right: I always wanted to work on the long term and its implications for the short term.
Ross: One of the very interesting things—there was a great book, or a great book title, by Peter Schwartz, “Inevitable Surprises”: we know this is going to happen, it’s just a question of how long it will take. These things are still surprises to most people, but we can map them out and start to plan ahead. And that’s what strategy is: being able to plan ahead.
Dominique: It’s more futurology, prospective, than immediate strategy, because some of this doesn’t have an immediate impact. For instance, what I said about warehouses: if you build a warehouse today, you absolutely have to take this into consideration. If you have existing warehouses, you may think, okay, I will wait until I have to scrap these warehouses; then they become stranded assets. I accept the notion of a stranded asset, but it’s a different strategy. You see my point. I like the title “Inevitable Surprises.” I hadn’t seen that book; I will check it out.
Ross: So let’s move on to AI. There are many, many angles we could take, but I’d like to start with AI and strategy. Of course, there are two broad domains. One is analytic AI: machine learning, trend analysis, picking up data internally and externally. But there is also generative AI, which is a cognitive complement, and some boards and some executive teams have been able to use generative AI as a sounding board, to provide frameworks and so on. So, as a starting point, how do you see the rise of generative AI in particular impacting the practice of strategy?
Dominique: We have a lot of issues with AI, in particular with generative AI. We just published a booklet for board members: what does it mean for them? It has several implications. Some boards are thinking of using generative AI in one way or another. And why not, by the way? As long as you don’t name an AI a board member, as long as you don’t do that, it’s fine. Everything is okay.
But an interesting part here, and it’s linked to strategy, is how much AI will change the relationship between management, the executive team, and the board. This is very important, because suddenly a feature—a technology—which was traditionally used by management, now comes up to the board, because the board has to know: how is it used? Who is going to use it? What are the challenges it leads to? Do we have ethical problems? Do we have data problems? Do we have algorithm bias problems? As a board member, you need to know about it. If you don’t, you may have huge issues, especially reputation issues, but even maybe strategic mistakes. So suddenly, it’s very interesting to see something which was a technical issue related to the executive team has suddenly become a board issue.
You have another one, by the way, which is parallel to that: communication. Communication was always an executive team issue. But suddenly, if you go too far into lobbying, then it becomes a board issue, because you put the reputation of the company at risk, and the boards want to know not only how you communicate internally or externally, but how much risk you as a manager present to the company if you do lobbying which may backfire. That’s very interesting. There are new responsibilities for the board, which we didn’t have 10 years ago. It’s really new.
Ross: So I suppose one of the things you’re pointing to there is that the depth and breadth of the governance issues around AI change the relationship between the board and the executive, in that more of what have been technological matters become the province of the board, in addressing risk appetite and framing the role of AI in the organization. One of the other overlays is that we are seeing—we’ve just seen some statistics—that the single most common use of AI by board members is to summarize the documents presented to them. So executives use AI to prepare things to communicate to the board, and boards use AI to filter and assess what the executives present to them. It changes the nature of the communication as well as the relative governance responsibilities.
Dominique: Another responsibility for boards, by the way, is not only to see the risk but also the opportunities. In other words, to say to management: are you using AI to grasp all the opportunities we may have? Because some executive teams don’t do it, so the board also has this responsibility.
But yes, you’re right. Today, it’s mostly used as a simple tool for summarizing points, summarizing documents. And why not? The only issue I see here is, again: do you take the staircase or do you take the elevator? Using AI is like taking the elevator—suddenly you have a good summary of all the documents you received to prepare for your board meeting. Fine, but having taken the elevator, you have not taken the staircase: you have not read the documents. You have not read between the lines—the things which have not been said, but which are important for you as a board member.
So here we have a lot of issues around keeping our attention at the right level so as not to miss things, especially when the documents given to the board are prepared by management. Management has messages it wants to give the board, but as a board member, is that enough? Probably not. Therefore, we start to see interesting attitudes emerging from boards: okay, I have all the documents from management, but I want more, and I will ask ChatGPT to find more. For instance: what is the reputation of the company today? What is being said on social networks about the company? How do people on Glassdoor talk about the company? Does this become a board issue or not? As a board member, I have to judge that. Management will never tell me that, really, people on Glassdoor are unhappy with the company, but it’s an issue, and as a board member I need to know about it. So I may use AI, especially tools like ChatGPT, to help me make my decisions and form an opinion. I can see a lot of changes for boards, and overall, by the way, positive ones, if they keep a critical mind.
Ross: So a couple of things in what you said are very, very interesting. One is that idea of keeping your attention at the right level, which goes back to the ideas of “Thriving on Overload” and allocating your attention in the right way—but the levels at which that attention might be applied change as we get, for example, better consolidation of lower-level information. And as you point out, one of the positive aspects is that, historically, directors used to get little more than what management presented to them, and now there are far more ways to gain consolidated external insights and perspective as a director beyond what management presents to you.
Dominique: And that’s new. That’s new for two reasons. First, we have AI as a fabulous tool you can use to get more information. But something else has also changed in the last two decades. Until roughly 2000, board meetings—not only in Europe—were mostly held in nice places, with a good dinner or a good lunch, good friends meeting together to validate what the president or the CEO was presenting. Fine.
Between 2000 and 2010, things changed. Suddenly, we entered a period where boards had to make sure the company was compliant, and the word compliance took on huge importance in that decade. As board members, we had to make sure we were compliant with every possible regulation, so there was no legal risk.
Now we start to see that even compliance is not enough for boards. They start to say: okay, we are compliant, but are we in line with what may happen? We need some forecasting. We are compliant with existing law, but what will possible new laws change for us? There is a lot here, for instance, on AI regulation. There is AI regulation in Europe, in Australia, in China, in the US—and they are all different. As a board, you need to know what new regulation will mean, and it is too late once it enters the compliance zone. The compliance zone is too late. You have to plan before, especially in Europe, because Europe is planning a lot of regulation.
Personally, don’t take that as me being for or against regulation, because I think regulations are basically here to protect citizens. Some of them are not good, but overall it’s good regulation. As a board or as an executive team, though, you have to forecast what possible regulations could mean for you, and sometimes what they will mean. This is very important with AI and environmental regulation—what they will mean as a competitiveness problem, because regulation in some countries may harm your competitiveness in other countries. It’s exactly what Trump says today about some environmental and AI regulations: he wants to fight with Europe because he finds the European regulations too tough. So this is a management problem, but it’s also a director’s problem. Directors today need to understand this much more.
How can they do that? Partially by asking ChatGPT and other tools, because you have access to an enormous amount of data which can help you think about this. It doesn’t solve the problem, but it can help you as a director to think about what kinds of issues may come out of possible new regulations, and what regulatory threats or risks they pose for board members or for the company. Here, I think it can be very useful, frankly, and we encourage boards to use it in this direction.
Ross: Pulling back to the bigger picture, around 10 years ago, you wrote a book on the impact of AI, before most people were anticipating that. I’d like you to reflect back on what you saw happening in the impact of AI then, where we’ve got to now, and the role of AI in business and strategy moving forward.
Dominique: I think 10 years ago, we didn’t have generative AI like ChatGPT. It was mostly the idea of artificial intelligence helping us to manage processes differently, to measure data differently, especially huge amounts of data. The impacts were mostly on marketing, communication, predictive maintenance, product design, and so on. We were seeing a revolution already. With ChatGPT and its colleagues, we see another revolution. But clearly, that’s the same issue: we have a lot of data now, we are just better at using this data.
To me, the next issues are: can we see the most important implications this will have on the way we work? For instance, agents in the labor force. What do we do with recruitment? What do we do with skills, at the individual level? Ecology of mind—will it change the way we think? The answer is yes. But then, will we be able to enhance ourselves? Ecology of mind, to me, is a huge issue for the future, and it leads to an ecology of organization. What do we need to change within the organization—in structures, in systems, recruitment systems, evaluation systems? And then, how do we plan to change the KPIs? Because we will need new KPIs. What are these new KPIs? So I see a lot of major changes coming up on which we don’t yet have enough information.
I mentioned earlier that today we see the recruitment of programmers shifting towards older programmers rather than young ones. Young programmers have fewer opportunities in the market. Why? Because today it’s easy to get the equivalent of a young programmer with ChatGPT, right? But if you have fewer young programmers, what will happen 10 years from now? You will not have seniors, because they will not have been trained in the difficulties of programming. You see this in programming companies, in legal firms, and in consulting firms. The juniors are replaced by AI—great, it’s useful, you gain on productivity—but what do you lose on training, on making mistakes, on learning, on failing? If we lose that, what kind of people will we have in the future? What kind of ecology of mind will they have? Will we be able, collectively—and how will we do it—to reinvent the way our minds work so that we don’t have an atrophy of our minds, an atrophy of our cognition?
For instance, let’s illustrate this with a simple biological example. Studies were done on the brains of London taxi drivers, and researchers discovered that the part used for spatial navigation was enlarged, because their exam was the most difficult in the world—they had to know every street in London, and London is a big city. Then, about 10 years ago, they were allowed to use GPS, and brain researchers have seen that this part of their brain has atrophied. In other words, they don’t have the same brain as before. Now the question is: is it pure atrophy, or did they replace this part of the brain with some other capability?
To me, that’s a very interesting point. If we let parts of our brain atrophy because of ChatGPT and other AI systems, will we be able to develop something else, like creativity? That would be great, but I’m not sure. This is a big question for neuroscientists, but also for us as managers and as people, even as individuals.
If I use ChatGPT more, will I develop something somewhere else in my brain? To take a very practical example, I still use spelling checkers a lot when I write. I was good at spelling, so the spell checker catches mistakes I would have caught myself if I had paid more attention. That’s fine, because I know how to spell; but if I were bad from the start, the spell checker would just improve my spelling, not my skill. It would just correct things. The same thing is becoming an issue for professors today. They have students who produce very good reports. You ask them something about Plato and the cave, and they write you five fabulous pages on Plato and the cave. Then you ask the student orally what he thinks about it, even what he thinks about the text he just handed in, and he is very poor. His thinking has atrophied. He doesn’t even know exactly why ChatGPT wrote what it wrote. You see the point.
It’s a very interesting question about our minds. How are we going to think tomorrow, all of us? How are we going to expand? You were saying that AI is amplified cognition—yes, but how can we really be sure we benefit from this amplified cognition? I even start to see an issue here with students, because some students say: okay, I produce a good paper, but I know I’m not that good. If I’m asked about this paper, what does it mean? First, I doubt myself. Am I really as good as my paper says? No, I know I’m not. So doesn’t it feed my imposter complex? I’m an imposter because I really am an imposter—I did not write this text, ChatGPT did, and if I’m asked about it, I’m lost.
So don’t you think we may soon have a self-confidence issue for people who rely too much on these new technologies? You and I are old enough to have good experience, to know how to sort out what’s good and what’s bad in what ChatGPT gives us. Fine. But for people who don’t have that experience, how will they work with this? How can we help them really amplify their cognition, their critical mind, and so on, instead of atrophying it? Am I clear? Do you see what I mean?
Ross: Absolutely. I see both things happening. There are undoubtedly people who are cognitively offloading and reducing their capabilities. I think we can design, and should be designing, AI so that when we interact with it, it not only enables us to be better with the tools, but leaves us improved after the tools are taken away. Broadly, some people will inevitably have reduced capabilities in some domains. And your point stands: if we offload some things, we are potentially able to do more things better, but we do need to design for that.
But just going back to your point around the ecology of mind—Gregory Bateson wrote his wonderful book, “Steps to an Ecology of Mind,” and it is extraordinarily relevant today. This pulls back to the big picture: rather than just, okay, we’re chatting, we’re using this for a particular task, we are thinking about the entire ecosystem of humans, organizations, AI, and so on. To round out, I’d like your thoughts on what you see as the positive potential. What can we, or should we, be doing to facilitate a robust, rich, generative ecosystem, or ecology of mind?
Dominique: That’s a very good question. I’m not sure I have the answer, but I think we all need to keep the question in mind. We need to work on it. I was very much influenced by Gregory Bateson when I was a student. I think he had a fabulous view—he’s a rich philosopher, so there’s a lot there—but what impressed me most was the notion that technology changes our mind, and today that is extremely relevant.
So we have to understand—especially the older generation, people over 30 who did not do their studies with ChatGPT—how our minds were working, and we have to help ourselves and younger people understand how our minds work and how they have changed. We are facing several issues. One is the cognitive atrophy I mentioned before. Another is data or knowledge overload—cognitive overload—particularly with social networks. Social networks have very good AI for keeping people hooked, but they hook them on consuming, not on creating. It’s like watching TV. It’s nice to watch TV, but you are not involved when you watch TV, and we need to understand how to involve our minds. Our minds only develop if we use them.
If you talk to neuroscientists, they tell you that the best way to avoid Alzheimer’s is to have activities with three components: one, they have to be difficult; two, they have to be fun; and three, they have to be varied. This is very true. If I’m in front of my social network, whichever one, it might be fun, but it’s not difficult and it’s not varied. If I only play bridge, for instance, I may have fun, but it’s not varied, so I don’t avoid Alzheimer’s. To me, that’s a very important point for management, for dealing with people and teams. How do we create enough fun, difficulty, and variety? If we do that, we help our people’s brains develop. We have an ecology of mind. I know it sounds simple, but it’s not that simple to put into practice when you manage a team.
Ross: I think that’s a wonderful, wonderful point to wrap up on. So where can people go to find out more about your work?
Dominique: Well, my latest books are in French, because most of my audience is French-speaking. I do conferences in English all over Europe, but if people read French, they can go to the blog called Xerfi Canal—X, E, R, F, I, Canal, like a canal—where I have a podcast once a month on one of these issues, often on AI, by the way. You can go to the podcast, enter a keyword, and you will find some of my talks. Now, with AI, you can have all of this translated into Chinese if you want. That’s quite easy today.
Ross: Fantastic. Thank you so much for your time and your insights today, Dominique.
Dominique: Thank you. It was very, very nice—first to see you again, since I hadn’t seen you for quite a while, and to have this conversation with you. I think we need to have a conversation like this two years from now and come back to what we said today. Where will we have made progress, especially on your last question? To me, it’s the most important: how will we be able to develop our own ecology of mind, and also that of the people we work with? I don’t have the answer yet, but I think we have to work on it.
Ross: Let’s both work on that and then regroup in two years then.
Dominique: Thank you very much.