What are the goals I really want to attain professionally and personally? I’m going to really keep my eye on that. And how do I make sure that I use AI in a way that’s going to help me get there—and also not use it in a way that doesn’t help me get there?
– Minyang Jiang (MJ)

About Minyang Jiang (MJ)
Minyang Jiang (MJ) is Chief Strategy Officer at business lending firm Credibly, leading and implementing the company’s growth strategy. Previously she held a range of leadership positions at Ford Motor Company, most recently as founder and CEO of GoRide Health, a mobility startup within Ford.
What you will learn
- Using AI to overcome human constraints
- Redefining productivity through augmentation
- Nurturing curiosity in the modern workplace
- Building trust in an AI-first strategy
- The role of imagination in future planning
- Why leaders must engage with AI hands-on
- Separating the product from the person
Transcript
Ross Dawson: MJ, it’s a delight to have you on the show.
Minyang “MJ” Jiang: I’m so excited to be here, Ross.
Ross: So I gather that you believe that we can be more than we are. So how do we do that?
MJ: Absolutely. I'm an eternal optimist, so I'm always—I'm a big believer in technology's ability to help humans be more, if we're thoughtful with it.
Ross: So where do we start?
MJ: Well, we can start maybe by thinking through some of the use cases that I think AI, and in particular, generative AI, can help humans, right?
I come from an alternative business financing perspective, but my background is in business, and I think there's been a lot of fear and maybe trepidation around what it's going to do in this space. But from my personal understanding, I don't know of a single business that is not constrained, right? Employees always have too much to do. There are things they don't like to do. There are capacity issues.
So for me, already, there are three very clear use cases where I think AI and generative AI can help humans augment what they do. So number one is, if you have any capacity constraints, that is a great place to be deploying AI, because already we're not delivering a good experience. And so any ability for you to free up constraints, whether it's volume or being able to reach more people—especially if you're already resource-constrained (I argue every business is resource-constrained)—that's a great use case, right?
The second thing is working on a use case where you are already really good at something, and you're repeating this task over and over, so there's no originality. You're not really learning from it anymore, but you're expected to do it because it's an expected part of your work, and it delivers value—but it's not something that you, as a human, are learning or gaining from anymore.
So if you can use AI to free up that part, then I think it’s wonderful, right? So that you can actually then free up your bandwidth to do more interesting things and to actually problem-solve and deploy critical thinking.
And then I think the third case is just, there are types of work out there that are just incredibly monotonous and also require you to spend a lot of time thinking through things that are of little value, but again, need to be done, right? So that’s also a great place where you can displace some of the drudgery and the monotony associated with certain tasks.
So those are three things already that I’m using in my professional life, and I would encourage others to use in order to augment what they do.
Ross: So that’s fantastic. I think the focus on constraints is particularly important because people don’t actually recognize it, but we’ve got constraints on all sides, and there’s so much which we can free up.
MJ: Yes, I mean, I think everybody knows, right? You’re constrained in terms of energy, you’re constrained in terms of time and budget and bandwidth, and we’re constrained all the time.
So using AI in a way that helps you free up your own constraints so that it allows you to ask bigger and better questions—it doesn’t displace curiosity. And I think a curious mind is one of the best assets that humans have.
So being able to explore bigger things, and think about new problems and more complicated problems. And I see that at work all the time, where people are then creating new use cases, right? And it just sort of compounds.
I think there’s new kinds of growth and opportunities that come with that, as well as freeing up constraints.
Ross: I think that’s critically important. Everyone says when you go to a motivational keynote, they say, “Curiosity, be curious,” and so on. But I think we, in a way, we’ve been sort of shunned.
The way work works is: just do your job. It doesn’t train us to be curious. So if, let’s say, we get to a job or workplace where we can say—we’re in a position of work where you can say—all right, well, all the routine stuff, all the monotony, we’ve done. Your job is to be curious.
How do we help people get to that thing of taking the blinkers off and opening up and exploring?
MJ: I mean, I think that would be an amazing future to live in, right? I mean, I think that if you can live in a world where you are asked to think—where you’re at the entry level, you’re asked to really use critical thinking and to be able to build things faster and come up with creative solutions using these technologies as assistance—wouldn’t that be a better future for us all?
And actually, I personally would argue and believe that curiosity is going to be in high demand—way higher demand than it's been in the past—because there is this element of spontaneous thinking which AI is not capable of right now, that humans are capable of.
And you see that even in personal interactions, right? A lot of people use these tools as a way to validate and continue to reinforce how they think. But we all know the best friendships and the best conversations come from being called out and being challenged and discovering new things about yourself and the topic at hand.
And that same sentiment works professionally. I think curiosity is going to be in high demand, and it’s going to be a sort of place of entry in terms of critical thinking, because those are the people that can use these tools to their best advantage, to come up with new opportunities and also solve new problems.
Ross: So I think, I mean, there is this—I say—I think those who are curious will, as you say, be highly valued, be able to create a lot of value. But I think there are many other people that have latent curiosity, as in, they would be curious if they got there, but they have been trained through school and university and their job to just get on with the job and study for the exam and things like that.
So how do we nurture curiosity in a workplace, or around us, or within?
MJ: I mean, I think this is where you do have this very powerful tool that is chat-based, for the most part, and that doesn't require super technical skills to access. At least today, the accessibility of AI is powerful, and it's very democratizing.
You can be an artist now if you have these impulses but never got the training. Or you can be a better writer. You can come up with ideas. You can be a better entrepreneur. You can be a better speaker.
It doesn’t mean you don’t have to put in the work—because I still think you have to put in the work—but it allows people to evolve their identity and what they’re good at.
What it’s going to do, in my mind, rather than these big words like displacement or replacement, is it’s going to just increase and enhance competition.
There’s a great Wharton professor, Stefano Plutoni, who talked about photography before the age of digital photography—where people had to really work on making sure that your shutter speed was correct, you had the right aperture, and then you were in the darkroom, developing things.
But once you had digital photography, a lot of people could do those things. So we got more photographers, right? We actually got more people who were enamored with the art and could actually do it.
And so some of that, I think, is going to happen—there’s going to be a layering and proliferation of skills, and it’s going to create additional competition. But it’s also going to create new identities around: what does it mean to be creative? What does it mean to be an artist? What does it mean to be a good writer?
In my mind, those are going to be higher levels of performance. I think everyone having access to these tools now can start experimenting, and companies should be encouraging their employees to explore their new skills.
You may have someone who is a programmer who is actually really creative on the side and would have been a really good graphic artist if they had the training. So allowing that person to experiment and demonstrate their fluidity, and building in time to pursue these additional skill sets to bring them back to the company—I think a lot of people will surprise you.
Ross: I think that’s fantastic. And as you say, we’re all multidimensional. Whatever skills we develop, we always have many other facets to ourselves.
And I think in this world, which is far more complex and interrelated, expressing and developing these multiple skills gives us more—it allows us to be more curious, enabling us to find more things.
Many large firms are actively trying to find people who are poets or artists or things on the side. And as you say, perhaps we can get to workplaces where, using these tools, we can accelerate the expansion of the breadth of who we are to be able to bring that back and apply that to our work.
MJ: I mean, I’ve always been a very big fan of the human brain, right? I think the brain is just a wonderful thing. We don’t really understand it. It truly is infinite. I mean, it’s incredible what the brain is capable of.
We know we can unlock more of its potential. We know that we don’t even come close to fully utilizing it.
So now having these tools that sort of mimic reasoning, they mimic logic, and they can help you unlock other skills and also give you this potential by freeing up these constraints—I think we’re just at the beginning of that.
But a lot of the people I work with, who are working with AI, are very positive on what it’s done for their lives.
In particular, you see the elevated thinking, and you see people challenging themselves, and you see people collaborating and coming up with new ideas in volume—rewriting entire poorly written training manuals, because no one reads those, and they’re terrible. And frankly, they’re very difficult to write.
So being able to do that in a poetic and explicable way, without contradictions—I mean, even that in itself is a great use case, because it serves so many other new people you’re bringing into the company, if you’re using these manuals to train them.
Ross: So you've worked on gen AI—generative AI projects in the workplace—put this into practice. So I'd love to hear, just off the top of your mind, what are some of the lessons learned as you did that?
MJ: Yeah, we’ve been deploying a lot of models and working with our employee base to put them into production. We also encourage innovation at a very distributed level.
The biggest thing I will tell you is—change management. For me, the important part is in the management, right? Change—everybody wants change. Everyone can see the future, and I have a lot to say about what that means. But people want change, and it’s the management of change that’s really difficult. That requires thought leadership.
So when companies are coming out with this AI-first strategy, or organizations are adopting AI and saying “we are AI-first,” for me the most important lesson is strategically clarifying for employees what that means.
That actually isn’t the first thing we did. We actually started doing and working and learning—and then had to backtrack and be like, “Oh, we should have a point of view on this,” right?
Because it’s not the first thing. The first thing is just like, “Let’s just work on this. This is fun. Let’s just do it.” But having a vision around what AI-first means, and acknowledging and having deep respect for the complexities around that vision—because you are touching people, right? You’re touching people’s sense of self-worth. You’re touching their identities. You’re touching how they do work today and how they’re going to do work three to five years from now.
So laying that out and recognizing that we don’t know everything right now—but we have to be able to imagine what different futures look like—that’s important. Because a lot of the things I see people talking about today, in my view, is a failure of the imagination. It’s pinning down one scenario and saying, “This is the future we’re going to march towards. We don’t love that future, but we think it’s inevitable.”
As leaders—it’s not inevitable. So doing the due diligence of saying, “Let me think through and spend some time really understanding how this affects my people, and how I can get them to a place where they are augmented and they feel confident in who they are with these new tools”—that are disruptive—that’s the hard work. But that is the work I expect thought leadership and leaders to be doing.
Ross: Yes, absolutely right. And as you say, any sense of inevitability is deeply dangerous at best.
And as you say, any way of thinking about the future, we must create scenarios—partly because there are massive uncertainties, and perhaps even more importantly, because we can create the future. There are no inevitabilities here.
So what does that look like? Imagination comes first if we are building the company of the future. So how do we do that? Do we sit down with whiteboards and Post-it notes? What is that process of imagining?
MJ: There’s so many ways to do it, right? I mean, again—I took a class with a Wharton professor, Scott Snyder. He talked about “future-back” scenario planning, which is basically:
First, I think you talk to many different people. You want to bring in as many diverse perspectives as possible. If you’re an engineer, you talk to artists. If you’re a product person, you talk to finance people. You really want to harness everyone’s different perspectives.
And I think, along with the technology, there’s one thing that people should be doing. They should first of all think about defining—for your own function or your own department—what does it mean to be literate, proficient, and a master at AI? What are the skill sets you’re going to potentially need?
Then it’s really up to every company. I myself created a strategic framework where I can say, “Okay, I think there’s a spectrum of use cases all the way from a lot of automation to AI being simply an assistant.” And I ask different people and functions in the company to start binning together what they’re doing and placing them along this spectrum.
Then I would say: you do this many times. You write stuff down. You say, “Okay, perhaps I’m wrong. Let’s come up with an alternate version of this.”
There are several levers that I think a lot of people could probably identify with respect to their industry. In my industry, one of the most important is going to be trust. Another one is going to be regulation. Another one is going to be customer expectation.
So when I lay out these levers, I start to move them to the right and left. Then I say, “Well, if trust goes down in AI and regulations go up, my world is going to look very different in terms of what things can be automated and where humans come in.”
If trust goes up and regulations go down, then we have some really interesting things that can happen.
Once you lay out multiple of these different kinds of scenarios, the thing you want to look for is: what would you do the same in each one of these scenarios? Would you invest in your employees today with respect to AI?
And the answer is always yes—across every single scenario, investing in your employees delivers ROI. That investment never loses its value.
So now you look at the things and say, “What am I going to do in my AI-first strategy that’s going to position me well in any future—or in a majority of futures?”
Those are the things you should be doing first, right now.
Then you can pick a couple of scenarios and say, “Okay, now I need to understand: if this were to change, my world is going to be really different. If that were to change, my world is going to be really different.”
How do I then think through what are the next layer of things I need to do?
Just starting with that framework—to say, what are the big levers that are going to move my world? Let’s assume these things are true. Let’s assume those things are true. What do my worlds look like?
And then, is there any commonality that cuts across the bottom? The use cases I gave earlier—around training, freeing up capacity—that cuts across every single scenario. So it makes sense to invest in that today.
I’m a big believer in employee training and development, because I always think there’s return on that.
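MJ's future-back framework can be sketched as a simple cross-scenario check: enumerate the lever settings, then keep the "no-regret" actions that pay off in every future. This is a minimal, hypothetical illustration—the lever names and the `pays_off` judgments are stand-ins, not anything from Credibly's actual strategy:

```python
from itertools import product

# Hypothetical levers from MJ's example; each can swing low or high.
levers = ["trust", "regulation", "customer_expectation"]

# A (hypothetical) judgment of whether an action pays off in a scenario.
def pays_off(action, scenario):
    if action == "train_employees":
        # MJ: "the answer is always yes" — pays off regardless of levers.
        return True
    if action == "automate_underwriting":
        # Only worthwhile when trust is high and regulation is light.
        return scenario["trust"] == "high" and scenario["regulation"] == "low"
    return False

# Lay out every combination of lever settings (2^3 = 8 scenarios).
scenarios = [dict(zip(levers, settings))
             for settings in product(["low", "high"], repeat=len(levers))]

actions = ["train_employees", "automate_underwriting"]

# "No-regret" moves: actions that make sense in every single scenario.
robust = [a for a in actions
          if all(pays_off(a, s) for s in scenarios)]
print(robust)  # -> ['train_employees']
```

The point of the sketch is the final filter: whatever survives every scenario (here, employee training) is what you invest in today; the lever-dependent actions become the "next layer" MJ describes.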
Ross: That’s really, really good. And I can just, I can just imagine a visual framework laid out just as you’ve described. And I think that would be extremely useful for any organization.
So you mentioned trust. There’s obviously multiple layers of trust. There’s trust in institutions. There’s trust in companies—as you mentioned, in financial customer service, very relevant. There’s trust in society. There’s trust in AI. There’s trust in your peers.
And so this is going to be fundamental. Of course, your degree of trust—or appropriate trust—in AI systems is a fundamental enabler or determinant of how you can get value from them. Absolutely.
So how do we nurture appropriate trust, as it were, within workplaces with technology in order to be able to support something which can be as well-functioning as possible?
MJ: Yeah. I mean, I think trust is foundationally going to remain the same, right? Which is: do you know what is the right thing to do, and do people believe that you’re going to consistently execute on that right thing, right?
So companies that have values, that have principles that are well-defined, are going to continue to capitalize on that. There’s no technology that’s going to change that.
Trust becomes more complicated when you bring in things like AI that can create content that's very, very persuasive, and that mimics the human side so well that people have difficulty differentiating, right?
So, for example, I run a sales team. And in sales, often people use generative AI to overcome objections. That is a great usage of generative AI. However, where do you draw the line between that—between persuasion and manipulation—and between manipulation and fraud, right?
I don’t think we need technology to help us draw the line. I think internally, you have to know that as a business. And you have to train your employees to know where the line is, right?
Ethics is always going to be something that the law can’t quite contain. The law is always what’s legal, and it’s sort of the bottom of the ethics barrel, in my opinion, right? So ethics is always a higher calling.
So having that view towards what is the use of ethical or accountable or responsible AI in your organization—having guardrails around it, writing up use cases, doing the training, having policies around what does that look like in our industry.
In many industries, transparency is going to be a very big factor, right? Do people know and do they want to know when they’re talking to a human versus talking to generative AI, right?
So there’s customer expectations. There’s a level of consistency that you have to deliver in your use cases. And if the consistency varies too much, then you’re going to create mistrust, right?
There’s also bias in all of the data that every single company is working with. So being able to safeguard against that.
So there are key elements of trust that are foundationally the same, but I think generative AI adds in a layer of complexity. And companies are going to be challenged to really understand: how have they built trust in the past, and can they continue to capitalize and differentiate that?
And those that are rushing to use generative AI use cases that then have the byproduct of eroding trust—including trust from their own employees—that’s where you see a lot of the backlash and problems.
So it pays to really think through some of these things, right? Where are you deploying use cases that are going to engender credibility and trust? And where are you deploying use cases that may seem like a short-term gain—until a bad actor or a misuse or something happens on the internet?
Which, with deepfakes, is now very easy to do. Your reputation becomes very brittle if you don't have a good foundational understanding of whether you have credibility with your customers and employees—whether they trust that you know what's right, so that you can lead them there.
Ross: Yeah, that’s obviously—in essence—trust can be well-placed or misplaced. And generally, people do have a pretty good idea of whether people, technology, institutions are trustworthy or not.
And that trustworthiness is ultimately reflected in people's attitudes, which ultimately flow through to business outcomes.
So I think the key here is that you have to come from the right place. So having the ethical framework—that will come through. That will be visible. People will respond to it.
And ultimately, customers will go to those organizations that are truly trustworthy, as opposed to those that pretend to be trustworthy.
MJ: And I think trust has a time dimension here. There's a time dimension with any technology, which is: you have to do things consistently, right?
Integrity is not a one-day game. It’s a marathon. It’s not a sprint. And so if you continue to be consistent, you can explain yourself when you make mistakes, right?
You know how to own up to it. You know what to say. You know how to explain it to people in a real way that they can understand.
That’s where trust—which is hard—trust is earned over time, and it can be depleted very quickly. And I think many, many companies have been burned through not understanding that.
But overall, it is still about doing the right thing consistently for the majority of the time and owning up to mistakes.
And to the extent that having an ethical AI framework and policy can help you be better at that, then I think those use cases and organizations and companies will be more successful.
And to the extent that you’re using it and it creates this downstream effect of eroding that trust, then it is extremely hard to rebuild that again.
Ross: Which takes us to leadership and leadership development. Of course, one foundation of leadership is integrity. There are many things about leadership which aren't changing. There are perhaps some aspects of leadership that are changing in what is a very, very fast-moving world.
So what are your thoughts around how it is we can develop effective leaders, be they young or not so young, into ones that can be effective in this pretty, pretty wild world we live in?
MJ: I think leadership, as always, is a journey, right? There are two things that in my mind leadership comes back to. One is experience, right? And the other is the dimension we already mentioned, which is time.
As a leader, first of all, I encourage all senior leaders of companies—people who are in the highest seats of the companies—to really get in the weeds involved with generative AI. Don’t outsource that to other people. Don’t give it to your youngest employees. Don’t give it to third-party vendors. Really engage with this tool.
Because they actually have the experience and the expertise to understand where it’s working and where it’s not working, right? You actually recognize what a good product looks like, what’s a good outcome, what seems like it’s not going to work.
A great marketing leader lives in the minds of their customers, right? So you’re going to know when it produces something which is like, this is not hitting the voice, this is not speaking with my customers, I’m going to continue to train and work. A new marketing analyst is not going to have any idea, right?
And also as a great leader, once you actually get into the guts of these tools and start to learn with it, then it is, as we mentioned before, your role to think about:
How do I create the strategy around where I’m going to augment my company—the growth, the business, the profit, and the people? What am I going to put in place to help foster that curiosity? Where am I going to allow for use cases to break those constraints, to create this hybrid model where both AI can be used and humans can be more successful?
What does being more successful mean outside of just making more money, right? Because there’s a lot of ways to make more money, especially in the short term. So defining that after having learned about the tool—that’s really the challenge that every leader is going to face.
You have this vastly changing landscape. You have more complexity than you’re dealing with, right? You have people whose identities are very much shaped by technology and who are dealing with their own self-worth with respect to these tools.
Now you have to come in and be a leader and address all of these dimensions. And exactly what you mentioned before, this idea of being a multidimensional leader is starting to become very important, right?
You can’t just say, “I’m going to take the company to this.” Now I have to think about: how do I do it in a way that’s responsible? And how do I do it in a way that guarantees long-term success for all of the stakeholders that are involved?
These questions have never really changed for leadership, but they certainly take on a new challenge when it comes to these tools that are coming in.
So making strategic decisions, envisioning the future, doing scenario planning, using your imagination—and most of all, having a level of humility—is really important here.
Because this idea of being able to predict the future, settle into it, and charge in—really, that looks fun on paper. That’s very flashy. And I understand there’s lots of press releases, that’s a great story.
The better story is someone who journals, takes time, really thinks about what this means, and recognizes that they don’t know everything. And we are all learning. We’re all learning. There’s going to be really interesting things that come up, and there’s going to be new challenges that come up.
But isn’t that what makes leadership so exciting, though, right? If everyone could do it, then that would be easy, right?
This is the hard thing. I want leaders to go and do the hard thing, because that's what makes it amazing. And that's what makes AI so useful for you. It's supposed to free up your constraints and help you do harder, more difficult things—take on more challenges, right?
And that’s where I think we can truly all augment ourselves.
Ross: Yes, it is. Any effective leader is on a path of personal growth. They are becoming more. Otherwise, they would not be fulfilling the potential of the people in the organization they lead—let alone themselves, right?
So to round out, what are a few recommendations or suggestions to listeners around how they can help augment themselves or their organizations—and grow into this world of more possibilities than ever before?
MJ: Yeah. So my best advice is asking people to separate the product from the person, right? You can use AI to create a better product, but in doing so, understand—is that making you a better person, right? Is that making you better at the thing that you actually want to do?
Of course, people have to understand the product. But even so—if your goal is to be a better writer, for example, and you use generative AI to create beautiful pieces—is that helping you be a better writer?
Because if it’s not, that may not be the best use case. Maybe you use it for idea generation or for copy editing. So being able to separate that and really understanding that is going to be important.
The other thing is: understand what parts of your identity you really value, that you want to protect, right? And don’t then use these tools that are going to slowly chip away at that identity. Really challenge yourself.
The thing about AI—until we get to AGI—that is interesting is that it is always going to validate you. It is always going to support what you want it to do. You’re going to give it data, and it’s going to do what you tell it to do. So it’s not going to challenge you, right?
It’s not going to make you better by calling you out on stuff that your friends would—unless you prompt it, right? Unless you say, “Critique how I can be better. Help me think through how I can be better.”
And using it in that way is going to help you be a better leader. It’s going to help you be a better writer, right? So making sure that you’re saving room to say, “Hey, yes, I’m talking to this machine,” but using it to make you better—and separating the product you’re going to create and the person you want to become.
Because no one is going to help you be a better person unless you really want to make an effort to do that. And so that, I think, is really key—both in your professional and personal life—to say:
What are the goals I really want to attain professionally and personally? I’m going to really keep my eye on that. And how do I make sure that I use AI in a way that’s going to help me get there—and also not use it in a way that doesn’t help me get there?
Ross: I think that’s really, really important, and not everyone recognizes that. That yes—how do we use this to make me better? Better at what I do? Better person?
And without intent, you can't achieve it. So that's very important.
So where can people follow you and your work?
MJ: Well, I post a lot on LinkedIn, so you should always look me up on LinkedIn.
I do work for Credibly, and we recently launched a credibly.ai webpage where we're constantly telling stories about what we're doing.
But I’m very passionate about this stuff, and I love to talk to people about it. So if you just look me up on LinkedIn and connect with me and want to get into a dialog, I’m more than happy to just share ideas.
I do think this is one of the most interesting, seismic shifts in our society. But I’m a big believer in its ability—when managed correctly—to unlock more human potential.
Ross: Fantastic. Thank you so much for your time, your insight, and your very positive energy and how we can create the future.
MJ: Thanks, Ross.