
Community IT Innovators Nonprofit Technology Topics
Community IT offers free webinars monthly to promote learning within our nonprofit technology community. Our podcast is appropriate for varied levels of technology expertise. Community IT is vendor-agnostic and our webinars cover a range of topics and discussions. Something on your mind you don’t see covered here? Contact us to suggest a topic! http://www.communityit.com
How to Nonprofit AI with Brenda Foster pt 1
Vanguard Communications’ Chief of Innovation Brenda Foster shared tips and practical advice on getting started using generative artificial intelligence (AI) tools at your nonprofit in a way that matches your mission and values.
Learn how to prompt, when and how to use AI tools, and when not to.
Learn how to evaluate the outputs and feel good using AI at your nonprofit.
In part 1, Brenda explains the various types of AI and walks through the ethical considerations and trade-offs for the environment, community justice, human creativity, privacy and security, and bias. She presents a five-question framework for creating your nonprofit AI policy. In part 2, Brenda explores good prompting and the differences between tools in this moment, and takes audience Q&A.
Are you wondering where to start with AI?
Chances are you and your colleagues are already using it for some things, and wondering how to use it better, or whether you should be using it at all. Your organization may be ambivalent or aghast at AI, have already embraced it, or be unsure where to start. You may have colleagues that are using AI for everything and others who won’t touch it.
Brenda Foster is a PRSA-NCC Hall of Fame inductee who has specialized in nonprofit communication for decades.
In this webinar, she shares tips and best practices on improving your AI prompts for communication success and explores situations where AI can improve the day-to-day job satisfaction for nonprofit staff. You can hear more from Brenda in our podcast discussion of AI tips here.
How can your nonprofit get started?
In this webinar learn how to prompt, when and how to use AI tools, and when not to. Learn how to evaluate the output and ensure that your team feels confident and comfortable using AI to make their jobs more interesting and to better support your mission.
As with all our webinars, this presentation is appropriate for an audience of varied IT experience.
Community IT is proudly vendor-agnostic, and our webinars cover a range of topics and discussions. Webinars are never a sales pitch, always a way to share our knowledge with our community.
Learn how to create an AI Acceptable Use Policy here. The nonprofit sector is deeply concerned with ethics, accountability, the environment, and systemic change. Learn more about ethical AI frameworks here.
_______________________________
Start a conversation :)
- Register to attend a webinar in real time, and find all past transcripts at https://communityit.com/webinars/
- email Carolyn at cwoodard@communityit.com
- on LinkedIn
Thanks for listening.
Carolyn Woodard: Welcome to part one of a two-part series on the webinar, How to Nonprofit AI with Brenda Foster. You can find the full video on our site, communityit.com. You can see the slides, download the slides, and there’ll be a transcript there as well, or just listen to this part one and part two.
Welcome to this Community IT Webinar, How to Nonprofit AI with Brenda Foster. My name is Carolyn Woodard. I’m the Outreach Director for Community IT, and I’m the moderator today.
And I’m very happy to hear from our guest, Brenda. But first, I’m going to go over our learning objectives.
- We are going to try to understand the various types of AI, artificial intelligence.
- We had somebody in the registration who said, please don’t assume that we all know about AI already. So, we’re going to do some level setting and do some background.
- We’re going to define ethical AI and why it matters for nonprofits.
- We’re going to apply the five-question framework to determine alignment with your mission. We’re going to use proven prompting strategies to get evidence-based, factual responses.
- We’re going to identify next steps to implement ethical AI in your organization.
Introductions: How to Nonprofit AI
And I’m really excited to welcome Brenda Foster to this webinar. She’s going to lead us through a bunch of these things. So, Brenda, would you like to introduce yourself?
Brenda Foster: Sure thing. Thank you, Carolyn. Hi, everybody. I’m Brenda Foster, and I am the Chief of Innovation at Vanguard Communications. We are a full-service public relations firm, based in DC, Hispanic woman owned, and we’re a boutique. There’s about 20 of us. For the last 38 years, we’ve worked solely for causes, nonprofits, some government work. That’s our specialty. That’s our passion.
One of the things that is really important to us is to make sure that nonprofits are not behind the curve in any kind of innovation. When social media platforms start to change, we do everything we can to jump in and learn them so that we can then advise our nonprofit clients on how they can take advantage of them before they fall behind. We want to make sure that everything that’s available to for-profit companies is available to nonprofits as well.
That’s what caused us to start an AI task force a couple of years ago to sort of figure out A, what’s going on with AI? What do they mean by AI?
And then we really looked at a lot of the benefits as well as some of the concerns. And what we ended up with was what I’ll talk about a lot today, which is ethical AI.
The idea that if we’re going to use it, we want to use it as ethically as possible and we know that our clients want us to as well.
What you’ll see here is a presentation that changes pretty often, and it changes because AI changes. And in fact, I have some interesting late-breaking articles that have come out in the past couple of days that I will talk about as we go through some of the trade-offs and things like that.
But anyhow, thank you very much for having me here. I appreciate it.
Carolyn Woodard: Awesome, awesome.
And you just mentioned ethical AI and transparency, and I noticed that a couple of people in the chat already are using AI assistants to help take notes. And it has the automatic statement there so that everyone knows there are AI assistants working. So that is one of the things I know that we recommend.
My name is Carolyn Woodard. I am the Director of Outreach and Marketing for Community IT. Before I worked for Community IT, I was the IT Director at a very large international nonprofit, and then at a very small local nonprofit. I am living proof that you don’t need to have a huge technology background to manage IT at nonprofits. And that’s something that our organization does. I also have been dabbling in AI, but I am definitely not an expert in it. I’m really excited about this presentation today.
Before we begin, if you’re not familiar with Community IT, a little bit about us. We’re a 100% employee-owned managed services provider. We provide outsourced IT support. We work exclusively with nonprofit organizations, and our mission is to help nonprofits accomplish their missions through the effective use of technology. We are big fans of what well-managed IT does for your nonprofit. We serve nonprofits across the United States, and we’ve been doing this for over 20 years. In fact, our 25th anniversary is next year. We are technology experts. We are consistently given the MSP 501 recognition for being a top managed services provider, an honor we received again in 2025.
We host a weekly podcast and a monthly free webinar series. You can access all of our previous webinar videos and transcripts on our website, communityit.com, and you can register for upcoming webinars there.
For these presentations, we are vendor agnostic. We make recommendations to our clients based only on their specific business needs. We never get a client into a product because we get a hidden incentive or benefit from it. But we do consider ourselves a best-of-breed IT provider. It’s our job to know the landscape: the tools that are available, reputable, and widely used. And we make recommendations on that basis for our clients, based on their business needs, priorities, and budgets. I’m really excited today to talk about some of these tools.
We’re going to leave as much time as we can for Q&A. Please submit your questions through either the chat feature or the Q&A at any time today. I’ll either break in and ask them if they’re timely or we’ll save them for the end.
Anything we can’t get to, I’ll ask Brenda to give us some written thoughts, and I’ll append those to the transcript, which we’ll put up on our website in about a week or so. You can check back after the webinar if we didn’t get to your question.
We’re also going to be on Reddit right after the webinar answering questions. Brenda agreed to come over for 15 minutes or so, so you can join us there too after the webinar.
A little bit more about us; our mission, as I said, is to create value for the nonprofit sector through well-managed IT. We also identify four key values as employee owners that define our company, trust, knowledge, service, and balance.
We seek always to treat people with respect and fairness, to empower our staff, clients, and sector to understand and use technology effectively, to be helpful with our talents, and we recognize that the health of our communities is vital to our well-being and that work is only a part of our lives. I know things are really stressful in the nonprofit sector recently, maybe always. We hope that you’re taking care of yourselves too.
Poll: Comfort Level with AI Tools
And now we’re going to launch our first poll. And this is a poll about your comfort level with AI tools. Your options are:
- Completely uncomfortable and unfamiliar with most tools. And there’s no shame in this webinar. We’re all here together to learn. So, if that’s you, go ahead and fill that out. You’ve come to the right place if you check that.
- Second option is somewhat uncomfortable. We use a few popular tools occasionally.
- The third is neutral. We’re neither uncomfortable nor comfortable; what you would consider average use.
- The fourth option is somewhat comfortable. We use a few AI tools daily.
- Fifth option is completely comfortable. We use a lot of the tools a lot of the time. Our colleagues ask us to teach them how.
- And then not applicable or other, if there’s something else that you can put in the chat, or if you’re in this webinar for some other reasons, you’re not really using AI yet.
And Brenda, can you see that?
Brenda Foster: I sure can. We’re all over the place here, which is totally fine. And I’m hoping that those of you who are completely comfortable can also contribute in the chat anything that you know. Because the one thing I will tell you is that I never would call myself an AI expert, okay? And I think that’s really important to say. I’m not an IT person in any way.
What I am is somebody who understands nonprofits and understands how AI can benefit them to this point. But I do not know every tool in America. I’ve had people say to me during conferences, what about this tool? And it was released yesterday. And I’m like, well, no, I haven’t had a chance to do that yet. If you know of a tool that I haven’t talked about, please share it with everybody.
These poll results are great. So hopefully, I’ll hit all of the places that you need me to go, from those of you who are completely uncomfortable all the way to completely comfortable.
Hopefully, there’ll be something that you can all learn here and that we can teach each other.
Carolyn Woodard: Sounds good. All right. Let’s get started.
AI Definitions
Brenda Foster: I want to do a little bit of level setting here, because we talk about AI a lot, and AI can mean so many different things.
If you like the old Steven Spielberg movie, AI to you is a little boy, a little fake boy. And maybe that’s coming. It’s sort of here in some cases.
The AI that I’m really going to be talking about today is generative AI. Let’s talk about the differences.
One of the things that people who want you to use AI constantly say is, oh, come on now. You’ve been using it for years. The genie is out of the bottle. You can’t put it back. You have to use AI.
Well, yes and no. What they’re talking about for the most part is embedded AI. And those are tools that operate behind the scenes. They enhance the functionality of different things, for example your Zoom screen, figuring out where you should be and what it should look like with a background.
Any kind of enhancements for Google Maps, where they are suggesting routes for you: things like that have been around for a really long time, and they continue to grow. Look at the algorithms.
We all talk about the algorithms. The algorithms are everywhere. Those are a form of embedded AI.
What we’re talking about is AI that you personally have a little more control over, all right? There is also discriminative AI; unless you’re an IT person, you’re probably not working with that directly. But it’s kind of exciting because that’s the kind of AI that really has the opportunity to transform processes and systems in a way that we haven’t seen before, and that could make things much more efficient. Now, is it scary? Okay, but all of this is scary. All of it’s scary.
Generative AI
What we’re going to talk about today is generative AI. Generative AI has a default brand right now, the way IBM did, or the way we say we’re going to Xerox something. The default right now is ChatGPT. If you’re looking for an example of what generative AI is, these are models that are used to generate content, all right? They can be ChatGPT, Claude, Perplexity, DeepSeek, anything you can think of, but also Canva and Beautiful.ai.
They can be used for tasks like creative writing and art, and audio synthesis, but they can also be used to analyze things and give you answers. They can be used to create chatbots that do a particular function that a person isn’t available to do. So today, that’s what we’re going to focus on, is the generative AI part.
AI Trade-Offs for Nonprofits
But before we do that, because you are nonprofits and you care about the world, we are going to talk a little bit about those trade-offs I mentioned earlier, the ones that we spent a lot of time researching. Trade-offs are something that I think everybody in this world who’s trying to make decisions about AI worries about. I think we’ve learned a lot as technology has changed over the past three to four decades about the harm that technology can do to our environment, to our children. There are a lot of issues out there that technology brought. I love that we’re having this conversation right at the dawn of generative AI and how it progresses.
I want to talk about some of these trade-offs, and what we know as of today. And I say as of today, because literally I changed this slide yesterday based on a new report that came out.
First, we’re going to talk about the environment. It’s the one I think most people know about.
The other one is privacy. We’ll talk about that as well.
Environment
Environmental impact is something that people really have a concern about, and it’s reasonable. This is a heavy-duty system. Think about what it’s able to do. It needs a lot of energy, and when you use that much energy, you have to cool, right? You have to have a cooling system.
What I will say, though, is that as of yesterday, the per-query water use of generative AI has declined substantially. Since January, it has declined by 40 times. Now, that’s the good news.
The bad news is usage of generative AI has increased by way more than that. So even though per query, you and I aren’t killing things, together, we’re making a giant mess, all right? 700 million GPT-4o queries per day translate into freshwater evaporation equivalent to the annual drinking water needs of 1.2 million people. That is not nothing. It’s going to get better.
I don’t have the link to the Washington Post article here but just look up ChatGPT in the Washington Post from yesterday. It was a really interesting article that talked about the things that are actually worse. I don’t know if this is going to make you feel better about yourself. It might make you feel better about AI. Netflix, streaming, they are all much worse. They’re actually the reason that the data centers are moving as fast as they do. They actually use much more energy.
The one that blew me away, not that I didn’t already know this was intensive: you’re actually doing more harm when you eat a hamburger, because of the intensive way in which meat is raised and becomes a hamburger in this country. That’s not an excuse, but it is worth keeping in mind when you’re weighing the things to be aware of, and this will keep getting better.
There’s real motivation to make it get better, mostly because look, people are fighting data centers right and left. They’re not going to be able to be everywhere. We do have to make this more efficient.
Community Justice
Okay. So, community justice. We know the data centers are a problem, and unfortunately, it’s the same model we have followed throughout history.
We look at Black, Brown, and Indigenous communities, working-class communities, and we say, let’s put it there. So, we don’t want that. Now, at the same time, people like you, nonprofits, are starting to see that AI-powered tools can actually amplify their reach, right?
We are better able to translate than we were before, and into many more languages. Triage can be faster. Initial screenings can be faster.
Providing materials, again, in many languages can be faster. So, there are a lot of ways. What we want to do is get you using AI in a way that is so helpful to your community, that it outweighs some of these other things.
And also, you want to protest. We’re going to talk about that, but we don’t want those data centers there.
Human Creativity
Okay, the devaluation of human creativity.
As a writer, this is really important to me, and I have to check even myself. My first thought when I start writing now is to see what ChatGPT thinks, and I have to take myself away from that and go, what is wrong with you? You were really good at this. Don’t make yourself bad at this.
The issue is that it really can replicate human styles without consent. I had a camera operator send me an audio clip of something his client asked him to do; it was Meryl Streep’s voice, but AI.
And they’re going to use it. And I have now seen many commercials where you’ve got celebrity-like voices, and we don’t like that. We certainly do not want to lose our artists, our designers, our writers.
I mean, we’re humans and you cannot replace human creativity.
At the same time, even those of us who are creative are finding that some of the tasks that are not our strong point, AI can take care of those so that we can be left to do the creative work.
Privacy, Intellectual Property, and Security
And now let’s look at the privacy, intellectual property, and security. We know that there are issues with privacy. I saw a lot of the questions that you guys posed before we got here today, and there were a lot about being safe and being private. We’re going to talk about that.
Look, we’re going to always have more vulnerabilities. But again, because the industry knows that’s important to us, they are going to race to try to make it as private as possible. And those that are doing it well will tout that and we’ll be able to recognize them.
Bias
The other thing is the bias, right? AI concerns around bias.
We do know that when Kodak film was invented, that’s when the bias started when it comes to imagery, right? Kodak learned to develop its film on white faces, and film is still not very good with non-white faces. And frankly, even our cell phones are not great at it, because that’s how photography was invented.
AI is the same issue, right? When it’s invented by a certain person, it’s going to only reflect that person’s way of looking at things. There’s a lot of concerns about wrongful arrests and systemic discrimination.
At the same time, AI in hiring can actually be very helpful, because as humans, we have unconscious bias. And AI looks at something and says, you want these five skills, this person has these five skills. It’s not worried about whether or not they speak a different language, where they live, how old they are, any of those things; it’s saying, okay, let’s start with this.
We found that it really can improve diversity by up to 25 percent because it’s reducing that human bias. So, I don’t know if you feel better or worse, if you’re like, never will I use AI because of all those problems. Hopefully, you feel like it is a bunch of trade-offs.
Ethical AI
If we’re going to practice ethical AI, I want to suggest that you check out Joy Buolamwini (I get that wrong every time; it’s a very, very hard name). She has written a book called Unmasking AI. She is also featured in a documentary about AI and all of these biases we’re talking about: Coded Bias. If it’s not available on Netflix, try PBS Independent Lens: https://www.pbs.org/independentlens/documentaries/coded-bias/
But even she says, look, we’re not going to get away from using it. We really just have to say, if it’s going to help us, or if it’s going to make us much more unequal, it’s ultimately up to us. And so that’s why diving in is not a bad idea, because the more we learn about it, the more we can use it for good.
Carolyn Woodard: Thank you for running us through these. I feel like each of those slides could be its own webinar of all those different ethical issues that you raised, but I love how you just cut right through it and gave us some examples.
Poll: Does Your Nonprofit Have a Generative AI Policy Already?
So now we have another poll for you in the audience. And this is, does your nonprofit have a generative AI policy already?
The answers you could choose are yes, I don’t know, no, but we’re working on one, no and I don’t think we’ll create one anytime soon, or that fifth option, not applicable or other.
We’re going to talk a little bit about what that policy should cover.
And just to reassure everybody, we are going to get into some ways to do prompting and some really practical stuff. But we did want to make sure that we were meeting everybody where you are and talking about some of these larger issues.
So, does your nonprofit have a generative AI policy?
Brenda Foster: Wow, okay. I will say again, my first presentation this year in January, it was completely no yeses, zero yeses. You guys have made a lot of progress in eight months, right?
Very, very good. That 16% is not bad. Working on one: nearly half of you, that’s great. I mean, it’s not something that just comes right out of thin air. You really do have to think of all of the issues. “No, and I don’t think we’ll create one anytime soon.”
That says to me a little bit of helplessness, right? Like that maybe you don’t necessarily have control over whether you have one or not, which can be very difficult. I’m hoping by the end of this, you will feel like you have the things that you need in order to help that decision along.
Carolyn Woodard: All right. Thank you, everyone, for answering those questions. That’s really helpful to us.
What Should Your Nonprofit Generative AI Policy Include?
Brenda Foster: All right. So, what we’re going to do, this is a set of questions that we’re going to go through that will help you figure out what your policy should be. The reason that we do this is because every single nonprofit cares about some things more than others, and they’re all going to be different. This is about the questions you need to ask, not about the answers you should have to those questions.
All right, let’s go ahead.
Who is impacted if we use this tool?
The first one you want to take a look at is, who is impacted if we use this tool? Are there trained professionals who will lose their jobs? That’s the bad side.
The good side is, can it make the job more creative, not just more efficient? If you’ve got some entry-level employees, could they be doing something other than monthly reports? That’d be so amazing, right?
Is it going to displace community labor or individual storytelling? Is it going to take that voice away? The biggest thing you want to ask is, who gains power or time, and who loses income, autonomy, or voice?
So that’s that first human impact question.
Who is left out if we use this tool?
Then the next question is who is left out if we use this tool, all right? Are there biases baked into the tools that we’re thinking of using?
Do these tools reflect the full range of disability experiences that may be, you know, in the membership that we serve? Could this tool actually bridge gaps for you that you’re not covering right now with the staff that you have? Or will it widen them?
And I always think about the digital divide. This technology has the opportunity to wipe that out.
But only if we make sure that that’s where we prioritize its use, is in those communities that need it the most.
How does this tool help us pursue our mission?
All right. Third question.
How does this tool help us pursue our mission? This is key, and it’s why this needs to be so individualized. There are environmental organizations, nonprofits out there, that are going to have a very, very strict AI policy: either no AI, or only one particular kind that they approve. They just are. That’s because their mission is to protect the planet, and why would you willingly use tools like that? That’s why people are vegan. That’s why there are all kinds of things.
So that is the most important piece to ask yourself. Within our mission, does AI make sense and how does using it make sense? Is it going to disconnect us from the people that we serve? Are we going to be able to expand our reach?
I worked for a housing organization and talked to a bunch of housing professionals earlier this year. And we came up with this idea: they’re not available after five o’clock to public housing residents. And imagine how it feels from five o’clock until whatever happens the next day to not have anybody to talk to, or to just leave a voicemail.
If you could come up with a chatbot that had the ability to answer basic questions, it’s not like you’re going to hire a new person anyhow. You’re not taking anybody’s job, but you now have found a way to help your residents get a little bit more peace of mind. These are the kinds of things you have to ask yourself.
Does it uplift your mission or is it going to quietly shift your priorities even silently in the background?
What do my employees need to maximize tool use?
All right. So next is, what do my employees need to maximize tool use?
And we found a lot. Our rollout of our first policy was early last year. And we are still working with employees, having meetings together. We have cohorts now for each of the different properties to help people use better prompts and things like that.
You need to help them understand, if they have concerns, where are they going to go with those? When a new tool comes out, is it totally fine to use it or should they say, hey, this is something I’d like to use, can we investigate it? What is it that employees need to know in order to use this responsibly?
And the way that you want it in accordance with your mission.
What are the ultimate risks and harms?
And then finally, what are the ultimate risks and harms? All right.
Now, this is true: you’re going to have to look at every tool differently. But once we figured out what our norms were, we came up with a checklist for every new tool.
You can use this to come up with a checklist.
How is our data handled and by whom?
What are you doing about bias and accuracy?
What about copyright and consent with the training data? Most of the major tools have a way for you to say no, you don’t want them to train on your data, even the free ones. You can tell the tool in settings that you don’t want it to use your data for training.
Do we know whether we are using a piece of tech that has actively tried to harm communities? Right.
And does it align with our values, or does it undermine them?
This is the kind of research that you need to do every time you try to bring on a new tool.
And again, if you just generally assess these questions for yourself, you should be able to come up with a checklist. It’s a little bit easier to check off as a new tool comes in. But the reason you want to do this is because otherwise your employees are using this stuff anyhow. It’s the Wild West out there. And if you can give them some guidance and also some reasons why certain tools aren’t great, then that’s even better.
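For readers who want to make this concrete: the idea of assessing the questions once and then reusing them for every new tool can be sketched in a few lines of Python. The question wording, function name, and approval logic below are illustrative placeholders based on the discussion, not Vanguard’s actual checklist:

```python
# A minimal, illustrative new-tool checklist based on the questions above.
# The wording and approval logic are placeholders; adapt them to your policy.

CHECKLIST = [
    "How is our data handled, and by whom?",
    "What is the vendor doing about bias and accuracy?",
    "What about copyright and consent in the training data? Can we opt out?",
    "Has this tech actively tried to harm communities?",
    "Does it align with our values, or does it undermine them?",
]

def evaluate_tool(name: str, reviewed: dict) -> dict:
    """Mark a question True once you've reviewed it and are comfortable
    with the answer; a tool is approved only when nothing is unresolved."""
    unresolved = [q for q in CHECKLIST if not reviewed.get(q, False)]
    return {"tool": name, "approved": not unresolved, "unresolved": unresolved}

# Example: a tool reviewed on three of the five questions is not yet approved.
result = evaluate_tool(
    "ExampleChatTool",
    {CHECKLIST[0]: True, CHECKLIST[1]: True, CHECKLIST[4]: True},
)
print(result["approved"])         # False
print(len(result["unresolved"]))  # 2
```

The point is simply that the hard thinking happens once, when you settle on the questions; each new tool then gets a quick, consistent review.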
Carolyn Woodard: I just want to jump in for two seconds. This is so interesting, and this is something that a lot of our nonprofit clients are asking us too. And it can be so hard to do the research on the company, on the tool.
If you’re suffering that, feeling that, we feel you, we understand it. And I just wanted to mention a colleague a couple of days ago was saying that he uses AI to help understand what their terms of agreement are.
Brenda Foster: That’s exactly right.
Carolyn Woodard: “Can you summarize this in plain language for me? What is your privacy policy?” that sort of thing.
Brenda Foster: Yeah, that’s exactly right. The tools are actually, you have to tell the tools not to lie, and I’ll tell you about that in a little bit. But you can ask each tool, every one of these questions, you can ask one tool this question across all of them, right?
Give me a comparison and they’ll do what they can. So, yes, use AI to assess AI, why not? Great, Carolyn, I love it.
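Carolyn’s tip of using AI to assess AI can also be made repeatable. Here is a minimal sketch of a prompt builder for reviewing a vendor’s terms of service in plain language; the vendor name, question wording, and function name are illustrative assumptions, not a recommended script:

```python
# Illustrative sketch: build a plain-language terms-of-service review prompt
# that you can paste into whichever generative AI tool your policy allows.

def build_terms_review_prompt(vendor: str, terms_text: str) -> str:
    """Assemble a review prompt; the questions mirror the policy checklist."""
    questions = [
        "How is my data handled, and by whom?",
        "Is my content used to train your models, and can I opt out?",
        "What is your privacy policy, in plain language?",
    ]
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        f"You are reviewing the terms of service for {vendor}. "
        "Summarize them in plain language, then answer:\n"
        f"{numbered}\n\n"
        "Cite the relevant section for each answer, and say 'not addressed' "
        "if the terms are silent on a question.\n\n"
        f"--- TERMS ---\n{terms_text}"
    )

prompt = build_terms_review_prompt("ExampleAI, Inc.", "(paste terms here)")
print("opt out" in prompt)  # True
```

Keeping the questions in one place means every vendor gets asked the same things, and the answers are easy to compare across tools.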