October 29, 2025

Beth Kanter on AI to augment nonprofits, Socratic dialogue, AI team charters, and using Taylor Swift’s pens (AC Ep20)

“I call it the AI sandwich. When we want to use augmentation, we’re always the bread and the LLM is the cheese in the middle.”

–Beth Kanter

About Beth Kanter

Beth Kanter is a leading speaker, consultant, and author on digital transformation in nonprofits, with over three decades of experience and global demand for her keynotes and workshops. She has been named one of the most influential women in technology by Fast Company and received NTEN's lifetime achievement award in nonprofit technology. She is the author of The Happy Healthy Nonprofit and The Smart Nonprofit.

Website:

bethkanter.org

LinkedIn Profile:

Beth Kanter

Instagram Profile:

Beth Kanter

What you will learn

  • How technology, especially AI, can be leveraged to free up time and increase nonprofit impact
  • Strategies for reinvesting saved time into high-value human activities and relationship-building
  • A practical framework for collaborating with AI by identifying automation, augmentation, and human-only tasks
  • Techniques for using AI as a thinking partner—such as Socratic dialogue and intentional reflection—to enhance learning
  • Best practices for intentional, mindful use of large language models to maximize human strengths and avoid cognitive offloading
  • Approaches for nonprofit fundraising using AI, including ethical personalization and improved donor communication
  • Risks like ‘work slop’ and actionable norms for productive AI collaboration within teams
  • Emerging human skills essential for the future of work in a humans-plus-AI organizational landscape

Episode Resources

Transcript

Ross Dawson: Beth, it is a delight to have you on the show.

Beth Kanter: Oh, it’s a delight to be here. I’ve admired your work for a really long time, so it’s really great to be able to have a conversation.

Ross Dawson: Well, very similarly, for the very, very long time that I’ve known of your work, you’ve always focused on how technologies can augment nonprofits. I’d just like to hear—well, I mean, the reason is obvious, but I’d like to know the why, and also, what is it that’s different about the application of technologies, including AI, to nonprofits?

Beth Kanter: So I think the why is, I mean, I’ve always—I’ve been working in the nonprofit sector for decades, and I didn’t start off as a techie. I kind of got into it accidentally a few decades ago, when I started on a project for the New York Foundation for the Arts to help artists get on the internet. I learned a lot about the internet and websites and all of that, and I really enjoyed translating that in a way that made it accessible to nonprofit leaders. So that’s sort of how I’ve run my career in the last number of decades: learn from the techies, translate it, make it more accessible, so people have fun and enjoy the exploration of adopting it.

And that’s what actually keeps me going. Whenever a new technology or something new comes out, it’s the ability to learn something and then turn around and teach it to others and share that learning. In terms of the most recent wave of new technology—AI—my sense is that with nonprofits, we have some that have barreled ahead, the early adopters doing a lot of cutting-edge work, but a lot of organizations are stuck: either they’re really concerned about all of the potential bad things that can happen with the technology, which I think keeps them from moving forward, or there’s no cohesive strategy around it, so there’s a lot of shadow use going on.

Then we have a smaller segment that is doing the training and trying to leverage it at an enterprise level. So I see organizations at these different stages, with a majority of them at the exploring or experimenting stage.

Ross Dawson: So, you know, going back to what you were saying about being a bit of a translator, I think that’s an extraordinarily valuable role—how do you take the ideas and make them accessible and palatable to your audience? But I think there’s an inspiration piece as well in the work that you do, inspiring people that this can be useful.

Beth Kanter: Yeah, to help people get past their concerns. There’s a lot of folks, and this has been a constant theme for a number of decades: the technology changes, but the people stay the same, and the concerns are similar. “It’s going to take a long time to learn it,” “I feel overwhelmed.” I think AI adds an extra layer, because people are very aware, from reading the headlines, of some of the potential societal impacts, and people also have in their heads some of the science fiction we might have grown up with, like the evil robots.

So that’s always there—things like, “Oh, it’s going to take our jobs,” you name it. Usually, those concerns come from people who haven’t actually worked with the technology yet. So sometimes just even showing them what it can do and what it can’t do, and opening them up to the possibilities, really helps.

Ross Dawson: I want to come back to some of the specific applications in nonprofits, but you’ve been sharing a lot recently about how to use AI to think better, which I suppose is one way of framing it. We have, of course, the danger of cognitive offloading, where we just stick all of our thinking into the machine and stop thinking for ourselves, but also the potential to use AI to think better.

I want to dig pretty deep into that, because you have a lot of very specific advice on that. But perhaps start with the big framing around how it is we should be thinking about that.

Beth Kanter: Sure. The way I always start with keynotes is I ask a simple question: If you use AI and it can give your nonprofit back five hours of time—free up five hours of time—how would you strategically reinvest that time to get more impact, or maybe to learn something new? I use Slido and get these amazing word clouds: people would learn something new, develop relationships, improve strategies, and so forth. I call that the “dividend of time,” and that’s how we need to think about adopting this technology.

Yes, it can help us automate some tasks and save time, but the most important thing is how we reinvest that saved time to get more impact. For every hour that a nonprofit saves with the use of AI, they should invest it in being a better human, or invest it in relationships with stakeholders.

Or, because our field is so overworked, maybe it’s stepping back and taking a break or carving out time for thinking of more innovative ideas. So the first thing I want people to think about is that dividend of time concept, and not just rush headfirst into, “Oh, it’s a productivity tool, and we can save time.”

The next thing I always like to get people to think about is that there are different ways we can collaborate with AI. I use a metaphor, and I actually have a fun image that I had ChatGPT cook up for me: there are three different cooks in the kitchen. We have the prep chef, who chops stuff or throws it into a Cuisinart—that’s like automation, because that saves time. Then we have the sous chef, whose job is tasting and making decisions to improve whatever you’re cooking. That’s a use case or way to collaborate with AI—augmentation, helping us think better. And the third is the family recipe, which is the tasks and workflows that are uniquely human, the different skills that only a human can do.

So I encourage nonprofits to think about whatever workflow they’re engaged with—whether it’s the fundraising team, the marketing team, or operations—to really think through their workflow and figure out what chef hat they’re wearing and what is the appropriate way to collaborate with AI.

Ross Dawson: So in that collaboration or augmentation piece, what are some specific techniques or approaches that people can use, or mindsets they can adopt, for ideation, decision making, framing issues, or developing ideas? What approaches do you think are useful?

Beth Kanter: One of the things I do when I’m training is—large language models, generative AI, are very flexible. It’s kind of like a Swiss army knife; you could use it for anything. Sometimes that’s the problem. So I like to have organizations think through: what’s a use case that can help you save time? What’s something that you’re doing now that’s a rote kind of task—maybe it’s reformatting a spreadsheet or helping you edit something?

Pick something that can save you some time, then block out time and preserve that saved time for something that can get your organization more impact. The next thing is to think about where in your workflow is something where you feel like you can learn something new or improve a skill—where your skills could flourish.

And then, where’s the spot where you need to think? I give them examples of different types of workflows, and we think about sorting them in those different ways. Then, get them to specifically take one of these ways of working—that is, to save time—and we’ll practice that.

Then we’ll take another way of working, which is to learn something new, and I’ll teach them a prompt like, “I need to learn about this particular process. Give me five different podcasts that I should listen to in the right order,” or “What is the 80/20 approach to learning this particular skill?”

So it’s really helping people take a look at how they work and figuring out ways where they can insert a collaboration to save time, or a collaboration to learn something new.

Ross Dawson: What are ways that you use LLMs in your work?

Beth Kanter: I use them a lot, and I tend to stay on the augmentation side—I never have them do tasks for me. I use them mostly as a thought partner, and to do deep research—not only to scan and find things that I want to read related to what I’m learning, but also to help me think about it and reflect on it.

One of my favorite techniques is to share a link to something I’ve read and maybe summarize it a bit for the large language model, saying, “I found these things pretty interesting, and it kind of relates to my work in this way. Lead me through a Socratic dialogue to help me take this reflection deeper.” Maybe I’ll spend 10 minutes in dialogue with Claude or ChatGPT in learn mode, and it always brings me to a new insight or something I haven’t thought of. It’s not that the generative AI came up with it; it just prompted me and asked me questions, and I was able to pull things from myself. I find that really magical.

Ross Dawson: So you just say, “Use a Socratic dialogue on this material”?

Beth Kanter: Yeah, sometimes a Socratic dialogue, or I might say what I think about it and ask it to argue with me. I’ll tell it, “You vehemently disagree. Now debate me on this.”
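
For readers who want to script this rather than retype it in a chat window, here is a minimal sketch in Python. It assumes the OpenAI Python SDK; the model name and the Socratic system prompt are illustrative placeholders, not Beth’s exact wording.

```python
# A minimal Socratic-dialogue loop: the human supplies their own notes
# first (the "bread" of the AI sandwich), and the model only asks questions.
# Assumes openai>=1.0 and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SOCRATIC_ROLE = (
    "You are a Socratic thinking partner. Do not summarize or give answers. "
    "Ask one probing question at a time that takes my reflection deeper. "
    "If I ask you to debate me, vehemently take the opposing view."
)

messages = [
    {"role": "system", "content": SOCRATIC_ROLE},
    # Seed the dialogue with your own thinking, not a request for answers.
    {"role": "user", "content": "Here is what I read and why it struck me: "
                                "<your summary and reactions go here>"},
]

for _ in range(5):  # roughly a ten-minute, five-turn reflection
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any capable chat model
        messages=messages,
    )
    question = response.choices[0].message.content
    print(question)
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": input("> ")})
```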

Ross Dawson: Yeah, yeah. I love the idea of using LLMs to challenge you. So I tend to not start with the LLM giving me stuff, but I start with giving the LLM stuff, and then say, “All right, tell me what’s missing. How can I improve this? What’s wrong with it?”

Beth Kanter: I call it the AI sandwich. When we want to use augmentation, we’re always the bread and the LLM is the cheese in the middle. You always want to do your own thinking. I take it one step further—I think with a pen and paper first.

Ross Dawson: Right. So, as you were alluding to before, one of the very big concerns that has really risen just over the last three to six months is everyone sharing things like “GPT makes you dumber,” and things to that effect, which I think is, in many ways, about how you use it. So you raise this idea of, “What can I learn? How can I learn it?” But more generally, how can we use LLMs to become smarter, more intelligent, better—not just when we use the tools, but also after we take them away?

Beth Kanter: That’s such a great question, and it’s one I’ve been thinking about a lot. I think the first thing we just discussed is a key practice: think for yourself first. Don’t automatically go to a large language model to ask for answers—start with something yourself.

I also think about how I can maximize my human, durable skills—the things that make me human: my thinking, my reflection, my adaptability. So, if I need to think about something, I go out for a walk first and think it through. I’ve also tried to approach it with a lot of intention, and I encourage people to think about what are human brain–only tasks, and actually write them up for yourself. Then, what are the tasks where you might start with your human brain and then turn to AI as a partner, so you have some examples for yourself that you can follow.

I encourage people to ask a couple of reflection questions to help them come up with this. Will doing this task myself strengthen the abilities I need for leadership, or is it something I should collaborate with AI on? Does this task require my unique judgment or creativity, so I need to think about it first? Am I reaching for AI because I don’t want to think this through myself? Am I just tired? I don’t want to use the word lazy, but maybe it’s just, “Oh, I don’t feel like thinking through this.” If you find yourself in that category, I think that’s a danger, because it’s very easy to slide into it, because the tools give you such easy answers if you ask them for just the answers.

So being really intentional with your own use cases—what’s human brain–only, what’s human brain–first, and then when do you go to AI? The other thing that’s also really important—I read this article. I’m not a Taylor Swift fan, but I am a pen addict, and I collect fountain pens and all kinds of pens. It was a story about how Taylor Swift has three different pens that she uses to write her songs: a fountain pen for reflective ballads, a glitter pen for bouncy pop tunes, and a quill for serious kinds of songs. She decides, if she wants to write a particular song, she’ll cue her brain by using a particular pen.

So that’s the thing I’ve started to train myself to do when I approach using this tool: what mode am I in, and remember that when I’m collaborating with AI. The other thing, too: the models, Claude, ChatGPT, Gemini, have all launched a guided learning or study-and-learn mode, which keeps you from just getting answers. I use that as my default. I never use the tools in the other modes.

Ross Dawson: All right, so you’re always in study mode.

Beth Kanter: I’m always in study mode, except if I’m researching something, I might go into deep research. The other thing I’ve done for myself, since ChatGPT lets you, is to set custom instructions on how I’d like to learn and what my learning style is. One of the points I’ve given it is: never give me an answer unless I’ve given you some raw material from myself first, unless I tell you to override it.

Because, honestly, occasionally there’s something routine—some email where I don’t need to go into study-and-learn mode, I just want to do it quickly. That’s my “I’m switching pens,” and I can override it when I want to. But my default is making myself think first.
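
As an illustration only (this is not Beth’s actual wording), a custom instruction along these lines might read: “Act as a learning partner, not an answer machine. Never give me an answer until I have shared my own notes, a draft, or my thinking first. If I say ‘override,’ skip this rule and answer directly.”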

Ross Dawson: Very interesting. Not enough people use custom instructions, but I think they also need to have the ability to switch them, so we don’t have one standard custom instruction, but just a whole set of different ways in which we can use different modes. As you say, I think the Taylor Swift pens metaphor is really lovely.

Beth Kanter: Yeah, it is. It’s like, okay, is this some routine email thing? It’s okay to let it give you a first draft, and it’ll save you some time. It’s not like this routine email is something I need to deeply think about. But if I’m trying to master something or learn something, or I want to be able to talk about something intelligently, and I want to use ChatGPT as a learning partner, then I’m going to switch into study mode and be led through a Socratic dialogue.

Ross Dawson: So, going back to some of the specific uses for it—you regularly run sessions for nonprofits on fundraising, and that’s quite a specific function and task. AI can be useful in a number of different aspects of that. So let’s just look at nonprofit fundraising. How can these tools—humans plus AI—be useful in that specific function?

Beth Kanter: If we step away from large language models and look at some of the predictive analytics tools that fundraisers use in conjunction with generative AI, those can help a great deal. Instead of just segmenting their audience into two or three target groups and sending the same email pitch to a group that might have 10,000 or 5,000 people, if they have the right data and the right tools, they can really customize the ask or the communication to different donors.

This is the kind of customization that would normally be reserved for really large donors—the million-dollar donors—that extreme personalization and care. But the tools allow fundraisers to treat everyone like a million-dollar donor, with more personalized communication. So that’s a really great way fundraisers can get a lot of value from these tools.

Ross Dawson: So what would you be picking up in the profile? Assuming the LLM generates the email, they would use some kind of information about the individual or the foundation to customize it. What data might you have about the target?

Beth Kanter: You could have information on what appeals they’ve opened in the past, what kinds of specific campaigns they donated to. Depending on the donor level, there might even be specific notes in the database that the AI could draw from. There could be demographic information, giving history, interests—whatever data the organization is collecting.

Ross Dawson: So everything in the CRM. I guess one of the other interesting things, though, is that most people—particularly foundations—have enough public information about them that the LLM can just find it on the public web for decent customization.

Beth Kanter: Yeah, but there’s also, I think, a need to think a little bit about the ethics around that too. If it is publicly accessible, you don’t want to cross the line into using that information to manipulate them into donating. But having a more customized communications approach to the donor makes them feel special.

Ross Dawson: Well, it’s just being relevant. When we’re communicating with anybody about anything, we need to tailor our communication in the best way, based on what we know. But one of the interesting questions coming out of this is: how does AI change relationships? Obviously, we know somebody to whatever degree when we’re interacting with them, and we use that human knowledge. Now, as you say, there’s an ethical component there. If LLMs intermediate those relationships, then that’s a very different kind of relationship.

Beth Kanter: Yes, it shouldn’t replace the human connection. It should free up the time so the fundraiser can actually spend more time and have more connection with the donor. Another benefit is that AI can help organizations generate impact reports in almost real time and provide those to donors, instead of waiting and having a lag before they get their report on what their donation has done. I think that could be really powerful.

Ross Dawson: Yeah, absolutely. That’s proactive communication—showing how it is you’ve helped. That used to be a lot of legwork, and that time can be reinvested in other useful ways.

Beth Kanter: Another example, not so much with smaller donors but maybe mid-size to higher-level donors: typically, organizations have portfolios of donors they have to manage, and it could be a couple hundred people. They have to figure out, “Who do I need to touch this week, and what kind of communication do I need to have with them? Is it time to take this person out to lunch? I’m planning a trip to another city and want to meet with as many donors as possible.” I think AI can really help fundraisers organize their time and do some of the scanning and figuring out, so they can spend more face time with donors.

Ross Dawson: Yes, that’s the key thing—if we can move to a point where we’re able to answer, as you say, that very first question you ask: what do you apply that time to? One of the best possible applications is more human-to-human interaction, be it with staff, colleagues, partners, donors, or people you are touching through your work.

Beth Kanter: Yeah, I think the other thing that’s really interesting—and I’m sure you’ve seen this, I know we’ve seen a lot in the Humans and AI community—is this whole idea around work slop.

And I think about that in terms of fundraising teams, especially with organizations that don’t have an overall strategy, where maybe somebody on the team is using it for a shortcut to generate a strategy, but it generates slop, and then it creates more of a burden for other people on the team to figure out what this is and rewrite it. That’s another reason to move away from thinking about AI as just a gumball machine where we put a quarter in and out comes a perfect gumball or perfect content.

Ross Dawson: That’s a great point. The idea of work slop comes from a recent Harvard Business Review article: some people just use AI to generate an output, and that slows down everything else because it’s not the quality it needs to be. So it’s a net time cost rather than a saving. So in an organization, small or large, what can we do to make AI use constructive and useful, as opposed to producing work slop and creating a net burden?

Beth Kanter: I think this comes down to something that goes beyond an acceptable use policy. It gets down to what our group or team norms are around collaborating with each other and with AI, and having some rituals. Maybe there’s a ritual around checking things, checking information to make sure it’s accurate, because we know these tools hallucinate, and finding the things that aren’t true. Or maybe it’s having a group norm that we don’t just generate a draft and send it along; we always think first, collaborate to generate the draft, and then look at it before we send it off to somebody else.

And maybe having a session where we come up with a formal team charter around how we collaborate with this new collaborator.
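
For illustration, a hypothetical excerpt from such a charter might include norms like:

  • Think first: never send an unedited AI first draft to a teammate or a donor.
  • Say when a document started as AI output, so others know what to check.
  • Verify names, figures, and sources before a draft leaves the team, because these tools hallucinate.
  • Name the mode before starting a task: automation, augmentation, or human-only.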

Ross Dawson: Yes, I very much believe in giving teams the responsibility of working out for themselves how they work together, including with their new AI colleagues.

Beth Kanter: Yeah, and it’s kind of hard because some organizations just jump into the work. I see it especially with the smaller ones that are more informal: even when they hear the words “team charter,” they think it’s too constricting or something. But I think this whole idea—what we’re talking about—is a bit of metacognition, of thinking about how we work before we do the work.

Ross Dawson: And while we do the work.

Beth Kanter: And while we do the work. Some people feel like it’s an extra step, especially when you’re resource constrained: “Why do I want to think through the work before we’re doing the work? We’ve got to get the work done. Why would we even pause while we’re doing the work to think about where we are with it?” So I think that skill of reflection in action is one of those skills we really need to hone in an AI age.

Ross Dawson: Yes, and an attitude. So to round out, what’s most exciting for you now? We’re almost at the end of 2025, we’ve come a long way, we’ve got some amazing tools, we’ve learned somewhat how to use them. So what excites you for the next phase?

Beth Kanter: I’m still really excited about how to use AI to stay sharp, because I think that’s going to be an ongoing skill. The thing I’m most excited about—and I’m hopeful organizations are going to start to get there in the nonprofit sector—is this whole idea around what are the new emerging skills, the human skills that we’re going to need to really be successful once we scale adoption of these tools. And then, how does that change the structure of our jobs, our team configurations, and the way that we collaborate? Those are the things that I’m really interested in seeing—where we go with this.

Ross Dawson: I absolutely believe that organizations—the best organizations—are going to look very different from the traditional organizations of the past. If we move to a humans-plus-AI organization, it’s not about every human just using AI; it changes what the organization is. We have to reimagine that, and that’s going to be very different for every organization.

Beth Kanter: Yeah. So I’m really excited about maybe giving a funeral—a joyful funeral—to some of the practices we’re doing now, without the AI, that aren’t working, and then really opening up and redesigning the way we’re working. That’s really exciting to me, because we’ve been so stuck, at least in the nonprofit sector, in our busyness and under pressure to get things done. I think the promise of these tools is really to open up and reinvent the way we’re working, and to be successful with the tools, you kind of have to do that.

Ross Dawson: Yes, absolutely. So Beth, where can people go to find out more about your work?

Beth Kanter: Well, I’m on LinkedIn, so you can find me on LinkedIn, and also at www.bethkanter.org.

Ross Dawson: Fabulous. Love your work. So good to finally have a conversation after all these years, and I will continue to learn from you as you share things.

Beth Kanter: Yes, and likewise. I’ve really enjoyed being in a community with you and enjoy reading everything you write.

Ross Dawson: Fantastic. Thank you.