Community IT Innovators Nonprofit Technology Topics
Community IT offers free webinars monthly to promote learning within our nonprofit technology community. Our podcast is appropriate for varied levels of technology expertise. Community IT is vendor-agnostic and our webinars cover a range of topics and discussions. Something on your mind you don't see covered here? Contact us to suggest a topic! http://www.communityit.com
Nonprofit AI: Pentagon and AI, Turn Off Model Sharing, Q&A
In this Nonprofit AI Podcast, Carolyn explores the complex intersection of nonprofit values and AI vendor ethics. Following a high-profile public dispute between the Pentagon and major AI providers, we look at what these corporate decisions mean for organizations that prioritize mission-aligned technology.
The conversation covers the practical side of AI safety, moving beyond the headlines to answer urgent questions from our recent webinar. Carolyn discusses:
- The ethical ripple effects of the Anthropic and OpenAI rivalry regarding government contracts.
- Why enterprise-level licenses are the primary recommendation for protecting sensitive nonprofit data.
- How to navigate privacy when using AI for board meeting transcriptions and note-taking.
- Practical steps to turn off model training in freemium tools like ChatGPT and Gemini.
- The existential question: Is adopting AI truly inevitable for the nonprofit sector?
As AI continues to disrupt education, health, and environmental sectors, Carolyn discusses the importance of intentionality—whether your organization chooses to opt in or opt out.
Resources Mentioned:
- Community IT Subreddit Webinar Q&A: https://www.reddit.com/r/NonprofitITManagement/comments/1rekaqk/qa_how_to_use_ai_tools_safely_at_nonprofits/
- LinkedIn Guide: Turning off AI model training from Kim Snyder, AI for Nonprofits Trainer at Tech Soup and Meet the Moment
- Upcoming: Part two of our AI safety webinar series with Matt Eshelman. Full video here: https://communityit.com/webinar-how-to-use-ai-tools-safely-at-nonprofits/
_______________________________
Start a conversation :)
- Register to attend a webinar in real time, and find all past transcripts at https://communityit.com/webinars/
- email Carolyn at cwoodard@communityit.com
- on LinkedIn
Thanks for listening.
Hello, and welcome to the Community IT Innovators Nonprofit AI Midweek Podcast. My name is Carolyn Woodard. I'm your host, and I'll start as I do every week with the disclaimer that I'm not an AI expert. No one is right now. We're all kind of feeling our way. So please come along with me and we'll get smarter together, specifically about nonprofits and AI.
Carolyn Woodard: And today I'm gonna start with a news story that maybe speaks directly to nonprofits that are feeling concerned and anxious about using AI: how it aligns with their values as values-oriented organizations, the ethics of it, and how hard it can be to find out the ethics of the big company that makes the AI tool you're thinking about using or already using.
Carolyn Woodard: Last week there was this story that's kind of been eclipsed by news events now, but at the end of last week there was this very public spat, I guess you would call it, between the Pentagon and Anthropic, which is a massive AI company that produces the tool Claude. The Pentagon basically said they were not going to allow their staff to use Claude, including for programming, because Anthropic would not allow them two crucial things. Anthropic, of course, had their own press release about why this was a problem for them and went against their ethics as a company.
Carolyn Woodard: The Pentagon wanted to use Anthropic's AI for massive surveillance of Americans, and Anthropic said that went against their terms and conditions. The Pentagon also wanted to use it for fully autonomous weapons, which I just have to say, you know, growing up in the 70s, 80s, and 90s, did no one see Terminator?
Carolyn Woodard: So there was a backlash to the backlash. Basically, OpenAI, which is the company that started this all off with ChatGPT, came in and said, well, we'll do it. ChatGPT is, you know, licensed to Microsoft, so if you're using Copilot, that's essentially ChatGPT, but the enterprise version. And the Pentagon said, okay, sure. It was such an odd story because OpenAI said that the massive surveillance and the fully autonomous weapons still go against our policies too, but we can work with you on that.
Carolyn Woodard: No matter what happens, the Pentagon has said they made something like a six-month window, I think, for getting everyone off of Claude and onto OpenAI. So it's not clear what's going to happen. And of course, we're all very concerned with the news of the war in the Middle East that started right after this announcement. So it was kind of odd timing all around.
Carolyn Woodard: But then there was a follow-up story that nonprofits might be interested in: downloads of Anthropic's Claude surged significantly following that public dispute with the Pentagon. It hit the number one spot on the Apple App Store, surpassing OpenAI's ChatGPT downloads. And you don't really know, but one can assume that increase was driven by the backlash to the story that Claude did not want to be used by the Pentagon in those two crucial ways, and that OpenAI had said they were fine with it.
Carolyn Woodard: So it's an interesting story all around. It kind of points to something we've talked about on this podcast a couple of times: the ethics, the concerns with AI tools, the difficulty in finding out even what your privacy rights and authorship rights are if you're using these AI tools. We get asked all the time which AI tool is better for nonprofits, which AI tools have more safeguards in place for your privacy.
Carolyn Woodard: And this is just another reminder that it's not set it and forget it. You can't say, well, we've done our research, we're going with OpenAI, they seem like a pretty good company that is in alignment with our values, and then the next day there's a story like this. Unfortunately, it's changing so quickly, and these companies are massive. They have lots of contracts. It's not like Anthropic doesn't have other Pentagon contracts, or other maybe questionable contracts with other giant technology companies. It's just something that nonprofits have to take into account when we're sifting through all of these data points and making our decisions about what vendors we're going to use.
Carolyn Woodard: As I think I said in a previous podcast, whenever you think about AI, remember that you are the object of billion-dollar advertising campaigns. This is clearly one of those moments where the comms team was saying, oh, we're taking a principled stand against the Pentagon, and OpenAI was like, we're gonna jump in there and do what needs to be done. They were both trying to make their case, and it had real-world outcomes. So just a little story to keep on top of as a nonprofit interested in AI.
Carolyn Woodard: As you may also know, we did a webinar last week on nonprofits using AI tools safely, and talked a lot about the difference between the enterprise-wide, paid versions at different levels and tiers, and what's called the freemium versions. I've been calling them public AI, but that's not really true. They're not produced by the government or available to everyone as a public good.
Carolyn Woodardthe more better term is freemium. It's uh loss leader, so they're provided for free for now. When you're using a free tool, as you probably already know, your data, your use, your information is how you're paying for it. They want to get that from you, so they are making it free for you. So,
Carolyn Woodard: In general, our highly recommended action from the webinar is to pay for licenses, pay for the enterprise version if you can afford it, and make the case to your leadership around the budget that it's important that you have those protections, that increased functionality, and that accountability. When you're using the paid version, you'll see underneath the tool you're using that your inputs and outputs are not used to train their model. So your information, your queries, and anything you're using it on, like databases you have, is not then being shared publicly.
Carolyn Woodard: Whereas if you go to the freemium ChatGPT, your inquiries, and what it outputs for you, are being used to train the model. So be very careful about the documents and the inquiries that you're giving to these freemium versions.
Carolyn Woodard: After the webinar, which was only an hour, we barely had time to answer some of the questions that came in at registration, and we answered some of the questions live. So we have a Reddit thread where we are trying to answer all of these questions. It's under r/NonprofitITManagement, and you'll see it right at the top: Q&A on how to use AI tools safely at nonprofits.
Carolyn Woodard: I thought I'd take a couple of those questions here.
Carolyn Woodard: One question that came up, which a couple of different people asked, is: are there any meeting note-taking tools for board meetings that are safe and secure? A general rule is that if you're using a Zoom or Teams account that you're licensed to use, the AI tool that is transcribing, taking notes, and maybe giving you a summary within that company-owned Zoom account is going to be covered by those same terms and conditions: it's keeping that transcription private and not sending it out to the model to learn from. You do need to have a look, just like with everything, and verify that the terms and conditions for the AI in the tool are the same ones you have when using Zoom, and that it's not listening in, so it's not the case that every conversation gets transcribed. Tools like Copilot and Zoom AI Companion have strong privacy policies.
Carolyn Woodard: I think you want to make sure that you are talking about this openly and transparently with the board. If you're using a transcription or an AI tool to summarize, you need to make sure that everyone in the meeting is okay with that. If you're discussing something that's very sensitive, you might want to turn it off. You probably still want to have a secretary taking their own summary and publishing that, if that is something your board does to generate the final minutes for approval. So kind of a blend: use humans for what humans can do, and have a look at those privacy policies. But yes, make sure that everyone is okay with it. And if they are okay with it, then you probably can use an internal AI tool.
Carolyn Woodard: Or be very cautious about using a third-party tool like OtterAI or some of the other tools that you can use for transcriptions. Again, when you're using those tools, make sure that everyone in the meeting is okay with you using a tool like that.
Carolyn Woodard: A couple of different people asked: even within Google Workspace, how careful do I need to be about data and sharing contact information? And someone asked: how can I feel like I'm not breaking policy by using AI?
Carolyn Woodard: You need to have a policy: a policy about what you expect people to use, and not use, within your work tools. You can use any secure platform insecurely. You could put your Social Security number in your Teams chat, and then everyone who has access to that chat will know your Social Security number.
Carolyn Woodard: It really is a combination. Within your company's enterprise tool, you're gonna have those privacy protections from that company; it's in their interest for you not to be upset that they leaked your information. But within your staff, you do need to make sure you have permission to use things like children's photographs. It might be possible for you to upload a photograph, but it could go against your organization's policies.
Carolyn Woodard: Do your staff know what your policies are? This may be something in the age of AI that we need to reiterate over and over. Maybe quarterly you need to remind people what the policy is, maybe more frequently than that. How are you protecting data? How are you keeping yourselves and others safe? What is the type of contact information that you can share? What are the types of personally identifying information that you don't share with other staff or publicly?
Carolyn Woodard: So just making sure that you have a policy that describes what you can do is really important.
Carolyn Woodard: And then making sure that everyone knows what the policy is.
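As one illustration of what enforcing a policy like that could look like in practice, here is a minimal sketch, not from the webinar, of screening text for common personally identifying patterns before anyone pastes it into a freemium AI tool. The pattern list and function names are hypothetical examples, and real PII detection is much harder than a few regular expressions.

```python
import re

# Hypothetical patterns an organization's AI acceptable use policy might flag.
# These are illustrative, not exhaustive; real PII takes many more forms.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def contains_pii(text: str) -> bool:
    """True if the text matches any flagged pattern, so staff can stop and check policy."""
    return any(p.search(text) for p in PII_PATTERNS.values())


def redact(text: str) -> str:
    """Replace anything matching a flagged pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text


if __name__ == "__main__":
    draft = "Contact Jane at jane@example.org or 555-867-5309 about the grant."
    if contains_pii(draft):
        print(redact(draft))
```

A check like this does not replace training or judgment; it is just one way to remind staff of the policy at the moment they are about to share something.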
Carolyn Woodard: Here's another quick question: are there any safe ways to use free tools? It is true that nonprofits are always looking for bargains or cheaper tools to use. And even within the paid tools, there are different tiers. You can have a basic Copilot or Gemini license, which might be $20 per person per month, and that might be a lot for your nonprofit. So you've got to weigh all of these options. And yes, there are free tools available.
Carolyn Woodard: There are ways to turn off the model training, and once you know the pattern, it is almost the same everywhere. I'm gonna shamelessly steal this from a colleague, Kim Snyder, who posted it on LinkedIn; I'll share that link in the show notes. Every major AI tool gives you the option to turn off model training. Go to your account and look under settings for privacy and data controls. It might be called something different on a different platform: data controls, privacy, account privacy. But somewhere in there under your account settings you're gonna find privacy, and what you're looking for is the model training setting, something that says model training on it, which you can turn on or off.
Carolyn Woodard: A word of caution. If you're using a freemium version of ChatGPT, for example, turning off that model training is gonna impact your functionality. You may find that there are things you can't do anymore. So that's a trade-off, and you're gonna have to decide whether it's a trade-off you're willing to make.
Carolyn Woodard: There are some other data privacy settings that you want to have a look at. You can look at data retention settings: how long is that tool keeping your chats? You could restrict that; you might say just keep this chat for today.
Carolyn Woodard: That's another thing you can do in the freemium versions: you can open a temporary chat in Gemini, and in, I can't remember what it's called in ChatGPT. You could also open it in a private window. So in your browser, use a private window in Chrome or Safari or Edge or whatever you're using, and then use that for your chat. Again, there are trade-offs. You gotta think about it very carefully and know what your policy is for the types of documents that you might be able to upload to a freemium AI tool, and the types of inputs that you would want to ask.
Carolyn Woodard: For example, if your nonprofit works with sensitive populations or has sensitive data, then no matter how much you set the privacy settings on the freemium accounts, I would be very, very cautious, and I would check whether your policy even allows it. Often, for example, when working with children, it's just not allowed at many places to upload anything about them into ChatGPT.
Carolyn Woodard: You can also look at your chat history controls. You can decide how long your old conversations are stored. This is again a functionality issue: if you are using a particular tool over and over, you can go back to your old prompts and reuse them. You can ask the AI to look at an old conversation you had and redo it, or use that same information for a new prompt. You're gonna lose that if you decide not to keep the chat, or to restrict what that freemium tool knows about you. But those are trade-offs you have to decide.
Carolyn Woodard: If you are using AI through your organization's Microsoft 365 or Google Workspace account, Copilot in Word, for example, or Gemini in Google Docs, you are largely going to be covered. You're going to have that privacy through that enterprise license agreement.
Carolyn Woodard: It gets a little bit tricky on some of the low-cost plans, the tier between the completely free freemium and the enterprise versions. For example, ChatGPT has the Go plan, which is $8 a month at the moment, and by default they use your conversations in that ChatGPT Go plan to personalize ads. You might want to turn off the model training toggle under settings and data controls, and turn off ad personalization. Again, it's just a question of how much privacy you need; there are ways to do it. If it's important to you to have a very low-cost version, you can ask AI the best ways to turn those privacy settings on and off. But it's definitely something to consider, especially, as I said, if you're working with sensitive data or data that is private to your nonprofit. If you don't want somebody to know your board minutes, or you have other sensitive documents that you just should not be sharing publicly, then you should not be sharing them publicly.
Carolyn Woodard: A kind of existential question came up; a couple of different people asked this type of question in our webinar: is AI really inevitable? There are so many downsides. Are nonprofits really going to be forced to use this technology? And I think it's worth thinking about.
Carolyn Woodard: It's one of the reasons why your AI acceptable use policy and your AI use is a board-level, executive-level, and staff-level conversation that your nonprofit needs to be having. If you haven't started having that conversation, start asking the questions and start asking how to make this a bigger conversation, because it is going to impact and disrupt every community and every sector that we're working in: environment, education, health, you name it. If a nonprofit works in an area, it's going to be disrupted by AI. And the AI is going to have impacts on the communities and the constituents that you care about.
Carolyn Woodard: Beyond just you using it as a productivity tool, it is going to be changing our sectors. It's going to be very disruptive. Five years from now, education is going to be completely different. Environmental work is going to be completely different. So know that going into it, and have these high-level conversations.
Carolyn Woodard: The investment of the companies behind our technology tools in AI is very high. Those four big companies, Amazon, Microsoft, Meta, and Google, are betting on AI. So it will be in everything and it will be changing everything.
Carolyn Woodard: I think it's kind of a similar question to: are you going to write a letter by hand and mail it to someone instead of emailing them? Are you going to go get your water from a well? It's all around us.
Carolyn Woodard: If you're going to opt out, you need to make that an intentional opt-out. If you're going to opt in, make that intentional as well.
Carolyn Woodard: Someone shared an example with me that I thought was a good analogy: social media. Say you are a nonprofit that works with young women on health and body image. Are you going to post about your nonprofit on Instagram? In some ways, that's playing into this evil system that is really messing with young women's body image. On the other hand, that's where they're turning for advice, so you kind of do want to be there. But as a nonprofit, you're going to make that decision intentionally. You're going to control that content, have policies around it, and keep revisiting that policy and your campaigns on Instagram, because as things change, as Instagram changes, as your work changes, you're going to need to be aware of that and intentional about it.
Carolyn Woodard: I think AI tools are very similar. Somebody said all of the environmental organizations can use AI to figure out how to fight the data centers that are wreaking havoc on the local environment. So that's definitely something to be aware of.
Carolyn Woodard: But I think it's going to be very hard to disengage completely, because all of the tools that you're using are going to be incorporating AI, or already have been. You're already using Microsoft tools, using Google tools, using devices that use those rare earths and change communities. So it really is a question of how far off the grid you are going to be able to go.
Carolyn Woodard: And again, if you're not at the table, all of these tools are going to evolve without nonprofits and philanthropy being involved in what we want these tools to be able to do and the types of tools that we want to use. So knowing what's out there and making informed decisions as much as you can is really important, in guiding the marketplace as well toward the tools that nonprofits want.
Carolyn Woodard: I think I'll leave it there for this edition of the Nonprofit AI Podcast. Please join us again on Friday for the podcast, which will be part two from that webinar on using AI tools safely with Matt Eshelman. And you can join us over on Reddit and ask your questions under r/NonprofitITManagement. The community is getting going over there, so don't be shy if you haven't used Reddit very much. It's very easy, and we are a very chill, happy community. Don't worry about making a mistake; just go ahead and ask your questions there, and we'll answer them as soon as we can, as well as we can, knowing that there are a lot of unknowns with AI right now. So until Friday, take care.