“The big picture is that every human on Earth deserves to live a life worth living… free of mental strife, physical strife, and the strife of war.”
– Matt Lewis

About Matt Lewis
Matt is CEO, Founder and Chief Augmented Intelligence Officer of LLMental, a Public Benefit Limited Liability Corporation venture studio focused on augmenting brain capital. He was previously Chief AI Officer at Inizio Health, and contributes in many roles, including as a member of OpenAI's Executive Forum, Gartner's Peer Select AI Community, and faculty at the World Economic Forum's New Champions initiative.
What you will learn
- Using AI to support brain health and mental well-being
- Redefining mental health with lived experience leadership
- The promise and danger of generative AI in loneliness
- Bridging neuroscience and precision medicine
- Citizen data science and the future of care
- Unlocking human potential through brain capital
- Shifting from scarcity mindset to abundance thinking
Episode Resources
Transcript
Ross Dawson: Matt, it’s awesome to have you on the show.
Matt Lewis: Thank you so much for having me. Ross, it’s a real pleasure and honor. And thank you to everyone that’s watching, listening, learning. I’m so happy to be here with all of you.
Ross: So you are focusing on using AI amongst other technologies to increase brain capital. So what does that mean?
Matt: Yeah. I mean, it’s a great question, and it’s, I think, the challenge of our time, perhaps our generation, if you will.
I’ve been in artificial intelligence for 18 years, which is like an eon in the current environment, if you will. I built my first machine learning model about 18 years ago for Parkinson’s disease, a degenerative condition in which people lose the ability to control their bodies as they wish.
I was working at Boehringer Ingelheim at the time, and we had a drug, a dopamine agonist, to help people regain function, if you will. But some small number of people developed this weird side effect, this adverse event that didn’t appear in clinical trials, where they became addicted to all sorts of compulsive behaviors that made their actual lives miserable. Like they became shopping addicts, or they became compulsive gamblers. They developed proclivities to sexual behaviors that they didn’t have before they were on our drug, and no one could quite figure out why they had these weird things happening to them.
And even though they were seeing the top academic neurologists in this country, the United States, or other countries, no one could say why Ross would get this adverse event and Matt wouldn’t. It didn’t appear in the studies, and there was no way to figure it out.
The only thing that really sussed out what was an adverse event versus what wasn’t was advanced statistical regression and, later, machine learning. But back in the day, almost 20 years ago, you needed massive compute, massive servers, shipped around on trucks, to run these kinds of analyses and actually improve clinical outcomes.
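The kind of analysis Matt describes, regressing patient features against adverse-event occurrence to separate a drug-related signal from noise, now runs in a few lines on a laptop, which is exactly the accessibility shift he points to. The sketch below is a toy on synthetic data, not the actual Boehringer Ingelheim model; every feature name, effect size, and the choice of scikit-learn's logistic regression are illustrative assumptions.

```python
# Toy sketch of adverse-event signal detection with logistic regression.
# All data is synthetic and all feature names are hypothetical; this is
# NOT the model described in the interview, just the general technique.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical patient features: age (years), daily dose (mg),
# and a baseline impulsivity score.
age = rng.normal(65, 8, n)
dose = rng.uniform(0.5, 4.0, n)
impulsivity = rng.normal(0.0, 1.0, n)

# Synthetic ground truth: event risk rises with dose and impulsivity.
logit = -4.0 + 1.1 * dose + 1.5 * impulsivity
event = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, dose, impulsivity])
model = LogisticRegression(max_iter=1000).fit(X, event)

# The fitted coefficients recover the direction of each risk factor,
# flagging dose and impulsivity as drivers of the adverse event.
coef_age, coef_dose, coef_impulsivity = model.coef_[0]
print(f"dose: {coef_dose:.2f}, impulsivity: {coef_impulsivity:.2f}")
```

On synthetic data like this, the fitted model recovers positive coefficients for the two factors that actually drive risk; real pharmacovigilance work adds far more features, confounder control, and validation.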
Now, thankfully, the ability to provide practical innovation in the form of AI to help improve people’s actual lives through brain health is much more accessible and democratized, in a way that wasn’t available then.
And if it first appeared for motor symptoms, for neurodegenerative disease, some time ago, now we can use AI to help not just the neurodegenerative side of the spectrum but also neuropsychiatric illness, mental illness, to help identify people that are at risk for cognition challenges.
Here in Manhattan, it’s like 97 degrees today. People don’t think the way they normally do when it’s 75. They make decisions that they perhaps wish they hadn’t, and a lot of the globe is facing similar challenges.
So if we can kind of partner with AI to make better decisions, everyone’s better off.
That construct—where we think differently, we make better decisions, we are mentally well, and we use our brains the way that was intended—all those things together are brain capital. And by doing that broadly, consistently, we’re better off as a society.
Ross: Fantastic. So in that case, you’re looking at machine learning, essentially being able to pull out patterns: patterns between environmental factors, drugs used, background, other genetic data, and so on.
So this means that you can—is this, then, alluding, I suppose, to precision medicine and being able to identify for individuals what the right pharmaceutical regimes are, and so on?
Matt: Yeah. I mean, I think the idea of precision medicine, personalized medicine, is very appealing. I think it’s very early, maybe even embryonic, kind of consideration in the neuroscience space.
I worked for a long time for companies like Roche and Genentech, others in that ecosystem, doing personalized medicine with biomarkers for oncology, for cancer care—where you knew a specific target, an enzyme, a protein that was mutated and there was a degradation, and identified which enzyme was a bit remiss.
Then tried to build a companion diagnostic to find the signal, if you will, and then help people that were suffering.
It’s a little more, at the risk of oversimplifying, straightforward in that regard, because if the patient had the biomarker, you knew that the drug would work.
Unfortunately, I think there’s a common misconception out there. I know you know this exceptionally well, but there are people listening who may not: the state of cognitive neuroscience, that is, what we know of the brain and how it works in the actual world in which we live, on planet Earth and terra firma, is probably about as far advanced as the state of the heart was when Jesus Christ walked the Earth about 2,000 years ago.
That is, we probably have about 100 years of real knowledge of how the brain works in the world, and we’re making decisions about how to engineer personalized medicine for a very, very young, nascent science of the brain, with almost no true, practical, contextual understanding of how it really works.
So I think personalized medicine holds tremendous promise. The reality of it just doesn’t pan out so well yet.
There are a couple of recent examples of this from companies like Neumora, Alto Neuroscience, and others, where they try to build these kinds of ex post facto precision medicine databases of people that have benefited from certain psychiatric medicines.
But they end up not being as beneficial as you’d like them to be, because we just don’t know really a lot about how the brain actually works in the real world.
There’s still the brain-versus-mind debate. But even if you extend past that debate, it’s hard to find many people building in the space who will recognize contextual variables beyond the brain and mind.
Including things like the biopsychosocial continuum, the understanding of spirituality and nature, all the rest.
All these things are kind of moving and changing and dynamic at a constant equilibrium.
And to try to find a point solution that says Matt or Ross are going to be beneficial at this one juncture, and they’re going to change it right now—it’s just exceptionally difficult. Important, but exceptionally difficult.
So I think the focus is more about how do we show up in the real world today, using AI to actually help our actual life be meaningful and beneficial, rather than trying to find this holy grail solution that’s going to be personalized to each person in 2026.
I’m not very optimistic about that, but maybe by 2036 we’ll get a little closer.
Ross: Yeah. So, I mean, I guess, as you say, a lot of what people talk about with precision medicine is specific biomarkers and so on, that you can use to understand when particular drugs would be relevant.
But back to the point where you’re starting with this idea of using machine learning to pick up patterns—does this mean you can perhaps be far more comprehensive in seeing the whole person in their context, environment, background, and behaviors, and so on, to be able to understand what interventions will make sense for that individual, and all of the whole array of patterns that the person manifests?
Matt: Yeah, I think it’s a great question. I think the data science, and the health science, of understanding what might be called the enactive psychiatry of the person, how they make meaning in the world, is just now starting to catch up with reality.
When I did my master’s thesis 21 years ago in health services research, there were people trying to figure out: if you were working in the world, how do we understand when you’re suffering with a particular illness, what it means to you?
It might mean to the policy wonks that your productivity loss is X, or your quality-adjusted life years is minus Y. Or to your employer, that you can’t function as much as you used to function. But to you—does it really matter to you that your symptom burden is A or Z? Or does it really matter to you that you can’t sleep at night?
If you can’t sleep at night, for most people, that’s really annoying. And if you can’t sleep at night six, seven, ten nights in a row, it’s catastrophic because you almost can’t function. Whereas on the quality score, it doesn’t even register—it’s like a rounding error.
So between the patient-reported outcomes, what matters to real people, and what matters to the decision-makers, there’s a lot of daylight, and there has been for a long time.
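The "rounding error" point can be made concrete with a toy quality-adjusted life-year (QALY) calculation. The utility decrement and the number of bad nights below are hypothetical, chosen only to show the scale mismatch Matt describes:

```python
# Toy QALY arithmetic (hypothetical numbers): even a large drop in
# health utility during ten sleepless nights barely registers on an
# annual quality-adjusted life-year total.
utility_decrement = 0.3   # assumed utility loss per sleepless night
bad_nights = 10

qaly_loss = utility_decrement * bad_nights / 365
print(f"QALY loss: {qaly_loss:.4f}")  # under one percent of a QALY
```

Ten nights the patient experiences as catastrophic show up as less than one hundredth of a QALY, which is why aggregate quality scores and lived experience can diverge so sharply.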
In the neuropsychiatric, mental health, brain health space, it’s starting to catch up, for I think a couple of reasons.
One, the lived experience movement. I chair the One Mind Community Advisory Network here in the States, which is a group of about 40 lived experience experts with deep subject matter expertise, all of whom suffer from neuropsychiatric illness, neurodivergence, and the rest. These are people that suffer daily but have turned their pain into purpose.
The industry at large has seen that in order to build solutions for people suffering from different conditions, you need to co-create with those people. I mean, this seems intuitive to me, but for many years—for almost all the years, 100 years—most solutions were designed by engineers, designed by scientists, designed by clinicians, without patients at the table.
When you build something for someone without the person there, you get really pretty apps and software and drugs that often don’t work. Now, having the people actually represented at the table, you get much better solutions that hopefully actually have both efficacy in the lab and effectiveness in the real world.
The other big thing I think that’s changing a lot is that people have more of a “citizen data scientist” kind of approach. Because we’re used to things like our Apple Watch, and our iPads, and our iPhones, and we’re just in the world with data being in front of us all the time, there’s more sensitivity, specificity, and demand for visibility around data in our life.
This didn’t exist 20 years ago.
So just to be in an environment where your mental health, your brain health, is being handed to you like a delivery, if you will, and not to get some kind of feedback on how well it’s working: 20 years ago, people were like, “Okay, yeah, that makes sense. I’m taking an Excedrin for my migraine. If it doesn’t work, I’ll take a different medicine.”
But now, if you get something and you don’t get feedback on how well it’s working, the person or organization supporting it isn’t doing their job.
There’s more of an expectation now, if you will, of that data analytics discipline: that people understand whether they’re making progress, what good looks like, whether they’re benchmarking against some kind of expectation, and then what the leaderboard looks like.
How is Ross doing, versus how Matt’s doing, versus what the gold standard looks like, and all the rest. This didn’t exist a generation ago, but now there’s more to it.
Ross: That’s really interesting. This rise of citizen science is not just giving us data, but it’s also the attitude of people—that this is a normal thing to do: to participate, to get data about themselves, to share that back, to have context.
That’s actually a really strong positive feedback loop to be able to develop better things.
So I think, as well as this idea of simply just getting the patients at the table—so we’ve talked quite a bit, I suppose, from this context of machine learning—of course, generative AI has come along.
So, first of all, just a big picture: what are the opportunities from generative AI for assisting mental well-being?
Matt: Yeah. I mean, first of all, I am definitely a technophile. But that notwithstanding, I will say that no technology is either all good or all bad. I think it’s in the eyes of the beholder—or the wielder, if you will.
I’ve seen some horrific use cases of generative AI that really put a fear into my heart. But I’ve also seen some amazing implementations that people have used that give me a tremendous amount of hope about the near and far future in brain health broadly, and in mental health specifically.
Just one practical example: in the United States and a lot of the English-speaking countries—the UK, New Zealand, and Australia—there is a loneliness epidemic.
When I say loneliness, I don’t mean people that are alone, that either choose to be alone or live lives that are alone. I actually mean people that have a lower quality of life and are lonely, and as a result, they die earlier and they have more comorbid illness. It’s a problem that needs to be solved.
In these cases, there are a number of either point solutions that are designed specifically using generative AI or just purpose-built generative AI applications that can act both as a companion and as a thought partner for people who are challenged in their contextual environment.
For people who don’t have other access or resources, in those times of need, AI can get them to a place where it catalyzes them to get back into an environment that they recall being useful at an earlier point.
For example, they find an interest in something that they found utility in earlier—like playing chess, or playing a card game, a strategy game, or getting back to dancing or some other “silly” thing that to them isn’t silly, but might be silly to a listener.
And because they rekindle this interest, they go and find an in-person way of reigniting with a community in the environment. The generative AI platform or application catalyzes that connection.
There are a number of examples like that, and the AI is nearly free: the use of it is zero cost for the person, but it prevents them from slipping down the slope toward an actual DSM-5 psychiatric illness, like depression or anxiety, and becoming much, much worse.
They’re kind of rescued by AI, if you will, and they become closer to healthy and well because they either find a temporary pro-social kind of companion or they actually socialize and interact with other humans.
I have seen some kind of scary use cases recently where people who are also isolated—I won’t use the word lonely—don’t have proper access to clinicians.
In many places around the world, there is a significant shortage of licensed professionals trained in mental health and mental illness. In many of these cases, when people don’t have a diagnosed illness or they have a latent personality disorder, they have other challenges coming to the fore and they rely on generative AI for directional implementation.
They do something as opposed to think something, and it can rapidly spiral out of control—especially when people are using GPTs or purpose-built models that reinforce vicious cycles or feedback loops that are negatively reinforcing.
I’ve seen some examples, due to some of the work I do in the lived experience community, where people have these built-in cognitive biases around certain tendencies, and they’ll build a GPT that reinforces those tendencies.
What starts out as a harmless comment from someone in their network—like a boyfriend, employee, or neighbor—suddenly becomes the millionth example of something that’s terrible. The GPT reinforces that belief.
All of a sudden, this person is isolated from the world because they’ve cut off relationships with everyone in their entire circle—not because they really believe those things, but because their GPT has counseled them that they should do these things.
They don’t have anyone else to talk to, and they believe they should do them, and they actually carry those things out. I’ve seen a couple of examples like this that are truly terrifying.
We do some work in the not-for-profit space trying to provide safe harbors and appropriate places for care—where people have considerations of self-harm, where a platform might indicate that someone is at risk of suicide or other considerations.
We try to provide a place where people can go to say, “Is this really what you’re thinking?” If so, there’s a number to call, 988, or a clinician you can reach out to.
But I think, like all technologies: you can use a car to drive to the grocery store. You could also use the same car to run someone over.
We have to really think about: what in the technology is innate to the user, and what it was really meant to do?
Ross: Yeah. Well, it’s a fraught topic now, as in there are, as you say, some really negative cases. The commercial models, with their tendency toward sycophancy and encouraging people to continue using them, start to get into all these negative spirals.
We do have, of course, some clinically designed tools, generative AI tools built to assist, but not everybody uses those. One of the other factors, of course, is that not everybody has the finances, or the funding isn’t available to provide clinicians for everybody. So it’s a bit fraught.
I go back to 15 years ago, I guess—Paro, the robot seal in Japan—which was a very cute, cuddly robot given to people with neurodegenerative diseases. They came out of their shell, often. They started to interact more with other people just through this little robot.
But as you say, these don’t have to be substitutes. Many people rail against the idea: “Oh, we can’t substitute real human connection with AI,” and real connection is obviously what we want.
But it can actually help re-engage people with human connection—in the best circumstances.
Matt: Yeah. I mean, listen, if I was doing this discussion with almost any other human on planet Earth, Ross, I would probably take that bait and we could progress it.
But I’m not going to pick that up with you, because no one knows this topic—of what humans can, should, and will potentially do in the future—better than you, than any other human. So I’m not going to take that.
But let me comment one little thing on the mental health side. The other thing that I think people often overlook is that, in addition to being a tool, generative AI is also a transformative force.
The best analogy I have comes from a friend of mine, Conor Grennan, who’s one of the top AI experts globally. He’s the Chief AI Architect at NYU here in New York City.
He says that AI is like electricity in this regard: you can electrify things, you can build an electrical grid, but it’s also a catalyst for major advances in the economy and helps power forward the industry at large.
I think generative AI is exactly like that. There are point solutions built off generative AI, but also—especially in scientific research and in the fields of neurotechnology, neuroscience, cognition, and psychology—the advances in the field have progressed more in the last three years post–generative AI, post–ChatGPT, than in the previous 30 years.
And what’s coming—and I’ve seen this in National Academy of Medicine presentations, NIH, UK ARIA, and other forums—what’s coming in the next couple of years will leapfrog even that.
It’s for a couple of reasons. I’m sure you’re familiar with this saying: back in the early 2000s, there was a saying in the data science community, “The best type of machine learning is no machine learning.”
That phrase referred to the fact that it was so expensive to build a machine learning model, and it worked so infrequently, that it was almost never recommended. It was a fool’s errand to build the thing, because it was so expensive and worked so rarely.
When I used to present at conferences on the models we would build, people always asked the same questions: What was the drift? How resilient was the model? How did we productionize it? How was it actually going to work?
And it was—frankly—kind of annoying, because I didn’t know if it was going to work myself. We were just kind of hoping that it would.
Now, over the last couple of years, no one asks those questions. Now people ask questions like: “Are robots going to take my job?” “How am I going to pay my mortgage?” “Are we going to be in the bread lines in three years?” “Are there going to be mass riots?”
That’s what people ask about now. The conversation has shifted over the last five years from “Will it work?” to “It works too well. What does it mean for me—for my human self?”
“How am I going to be relevant in the future?”
I think the reason why that is, is because it went from being kind of a tactical tool to being a transformative force.
In the scientific research community, what’s really accelerating is our ability to make sense of a number of data points that, up until very recently, people saw as unrelated—but that are actually integrated, part of the same pattern.
This is leading to major advances in fields that, up until recently, could not have been achieved.
One of those is in neuroelectronics. I’m very excited by some of the advances in neurotechnology, for example—and we have an equity interest in a firm in this space.
Implantable brain devices are one major place where the treatment of mental illness can advance. AI is both helping to decipher the brain’s language of communication from a neuroplasticity standpoint, and making it possible for researchers and clinicians to communicate with the implant in your brain when you’re not in the clinic.
So, if you go about your regular life—you go to work, you play baseball, you do anything during your day—you can go about your life, and because of AI, it makes monitoring the implant in your brain no different than having a continuous glucose monitor or taking a pill.
The advances in AI are tremendous—not just for using ChatGPT to write a job description—but for allowing things like bioelectronic medicine to exist and be in the clinic in four or five years from now.
Whereas, 40 years ago, it would have been considered magic to do things like that.
Ross: So, we pull this back, and I’d like to come back to where we started. Before we started recording, we were chatting about the big picture of brain capital.
So I just want to think about this idea of brain capital. What are the dimensions to that? And what are the ways in which we can increase it? What are the potential positive impacts? What is the big picture around this idea of brain capital?
Matt: Yeah. I mean, the big picture is that every human on Earth deserves to live a life worth living. It’s really that simple. Every person on planet Earth deserves to have a life that they enjoy, that they find to be meaningful and happy, and that they can live their purpose—every person, regardless of who they’re born to, their religion, their race, their creed, their region.
And they should be free of strife—mental strife, physical strife, and the strife of war. For some reason, we can’t seem to get out of these cycles over the last 100,000 years.
The thesis of brain capital is that the major reason why that’s been the case is that a sixth of the world’s population currently has mental illness—diagnosed or undiagnosed. About a quarter of the world’s population is living under what the World Health Organization calls a “brain haze” or “brain fog.”
We have a kind of collective sense of cognitive impairment, where we know what we should do, but we don’t do it—either because we don’t think it’s right, or there are cultural norms that limit our ability to actually progress forward.
And then the balance of people are still living with a kind of caveman mindset. We came out of the caves 40,000–60,000 years ago, and now we have iPhones and generative AI, but our emotions are still shaped by this feeling of scarcity—this deficit mindset, where it feels like we’re never going to have the next meal, we’re never going to have enough resources.
It’s like there’s never enough, all the time.
But actually, right around the corner is a mindset of abundance. And if you operate with an abundance mindset, and believe—as Einstein said—that everything is a miracle, the world starts responding appropriately.
But if you act like nothing is a miracle, and that it’s never going to be enough, that’s the world through your eyes.
So the brain capital thesis is: everyone is mentally well, everyone is doing what’s in the best collective interest of society, and everyone is able to see the world as a world of abundance—and therefore, a life worth living.
Ross: That is awesome. No, that’s really, really well put. So, how do we do it? What are the steps we need to take to move towards that?
Matt: Yeah. I mean, I think we’re already walking the path. I think there are communities—like the ones that we’ve been together on, Ross—and others that are coming together to try to identify the ways of working, and putting resources and energy and attention to some of these challenges.
Some of these things are kind of old ideas in new titles, if you will. And there are a number of trajectories and considerations that are progressing under new forms as well.
I think one of the biggest things is that we really need both courage to try new ways of working, and also—to use a Napoleon expression—Napoleon said that a leader’s job is to be a dealer in hope.
We really need to give people the courage to see that the future is brighter than the past, and that nothing is impossible.
So our considerations in the brain capital standpoint are that we need to set these moonshot goals that are realistic—achievable if we put resources in the right place.
I’ve heard folks from the World Economic Forum, World Health Organization, and others say things like: by this time next decade—by the mid-2030s—we need to cure global mental illness completely. No mental illness for anyone.
By 2037–2038, we need to prevent brain health disorders like Alzheimer’s, Parkinson’s, dystonia, essential tremor, epilepsy, etc.
And people say things like, “That’s not possible.” But when you think about other major chronic illnesses, like hepatitis C or breast cancer: when I was a kid, both of those were death sentences. Now they’re chronic illnesses, or curable entirely.
So we can do them. But we have to choose to do them, and start putting resources against solving these problems, instead of just saying, “It can’t be done.”
Ross: Yeah, absolutely. So, you’ve got a venture in this space. I’d love to round out by hearing about what you are doing—with you and your colleagues.
Matt: So, we’re not building anything—we’re helping others build. And that’s kind of a lesson learned from experience.
To use another quote that I love—it’s a Gandhi quote—which is, “I never lose. I only win or I learn.”
So we tried our hand at digital mental health for a time, and found that we were better advisors and consultants and mentors and coaches than we were direct builders ourselves.
But we have a firm. It’s the first AI-native venture studio for brain capital, and we work with visionary entrepreneurs, CEOs, startups—really those that are building brain capital firms.
So think: mental illness, mental health, brain health, executive function, mindset, corporate learning, corporate training—that type of thing. Where they have breakthrough ideas, they have funding, but they need consideration to kind of help scale to the ecosystem.
We wrap around them like a halo and help support their consideration in the broader marketplace.
We’re really focused on these three things: mental health, mindset, and mental skills.
There are 12 of us in the firm. We also do a fair amount of public speaking—workshops, customer conferences, hackathons. The conference we were just at last week in San Francisco was part of our work.
And then we advise some other groups, like not-for-profits and the government.
Ross: Fantastic. So, what do you hope to see happen in the next five to ten years in this space?
Matt: Yeah, I’m really optimistic, honestly. I know it’s a very tumultuous time externally, and a lot of people are suffering. I try to give back as much as possible.
We, as an organization, we’re a public benefit corporation, so we give 10% of all our revenue to charity. And I volunteer at least a day a month directly in the community. I do know that a lot of people are having a very difficult time at present.
I do feel very optimistic about our mid- and long-term future. I think we’re in a very difficult transition period right now because of AI, the global economic environment, and the rest. But I’m hopeful that come the early 2030s, human potential broadly will be optimized, and many fewer people on this planet will be suffering than are suffering at present.
And hopefully by this time next decade, we’ll be multi-planetary, and we’ll be starting to focus our resources on things that matter.
I remember there was a quote I read maybe six or seven years ago—something like: “The best minds of our generation are trying to get people to click on ads on Facebook.” When you think about what people were doing 60 years ago—we were building the space shuttle to the moon.
The same types of people that would get people to click on ads on Meta are now trying to get people to like things on LinkedIn. It’s just not a good use of resources.
I’ve seen similar commentary from the Israeli Defense Forces. They talk about all the useless lives wasted on wars and terrorism. You could think about not fighting these battles and start thinking about other ways of helping humanity.
There’s so much progress and potential and promise when we start solving problems and start looking outward, if you will.
Ross: Yeah. You’re existing in the world that is pushing things further down that course. So where can people find out more about your work?
Matt: Right now, LinkedIn is probably the best way.
We’re in the midst of a merger of equals between my original firm, LLMental, and my business partner John Nelson’s firm, John Nelson Advisors. By Labor Day (U.S.), we’ll be back out in the world as iLIVD (i, L, I, V, D), with a new website and all the rest.
But it’s the same focus: AI-native venture studio for brain health—just twice the people, twice the energy, and all the consideration.
So we’re looking forward to continuing to serve the community and progressing forward.
Ross: No, it’s fantastic. Matt, you are a force for positive change, and it’s fantastic to see not just, obviously, the underlying attitude, but what you’re doing. So, fantastic. Thank you so much for your time and everything you’re doing. Thank you again.
Matt: Thank you again Ross, I really appreciate you having me on, and always a pleasure speaking with you.