Community IT Innovators Nonprofit Technology Topics

Nonprofit AI: Implementation Framework, AI Literacy

Community IT Innovators Season 7 Episode 16


Resources shared in this episode: 

Gallup Poll January 2026 on AI use: https://apnews.com/article/ai-workplace-gemini-chatgpt-poll-4934bc61d039508db32bc49f85d63d99

Build Consulting 5 Category AI Implementation Framework by Kyle Haines: https://buildconsulting.com/blog/a-strategic-framework-for-nonprofit-ai-investment/

1. Return on Investment - what are you trying to do, and is an AI tool the best way to do it? 

2. Technical and Data Feasibility - are you ready? Is your data ready? 

3. Mitigating AI Risks - legal, ethical, reputational...

4. Anticipating Costs - AI tools are not free

5. Change Impacts - making sure intentional change management is in place.

How AI is changing search, Yoast wrap up from 2025: https://yoast.com/seo-in-2025-wrap-up/ 

AI Literacy Measures and Suggestions from US Department of Labor: https://www.dol.gov/sites/dolgov/files/ETA/advisories/TEN/2025/TEN%2007-25/TEN%2007-25%20%28complete%20document%29.pdf

AI Literacy Measures: 

1. Understand AI Concepts

2. Explore AI Uses

3. Direct AI Effectively

4. Evaluate AI Outputs

5. Use AI Responsibly

Delivery Principles for AI Literacy Growth

1. Enable Experiential Learning

2. Embed Learning in Context

3. Build Complementary Human Skills 

4. Address Prerequisites to AI Literacy

5. Create Pathways for AI Learning

6. Prepare Enabling Roles

7. Design for Agility

Webinar: How to Use AI Tools Safely at Your Nonprofit with Matthew Eshleman. https://communityit.com/webinar-how-to-use-ai-tools-safely-at-nonprofits/


_______________________________
Start a conversation :)

Thanks for listening.


Carolyn Woodard

Hello and welcome to the Community IT Innovators Technology Topics podcast, the midweek Tuesday AI Check-in for Nonprofits. My name is Carolyn Woodard, and I'm your host. And my usual disclaimer: I'm not an AI expert. None of us are. We're all just going along together and learning together to try and get smarter together.

Carolyn Woodard

So I wanted to share a couple of things with you this week. I was looking at a little bit older piece, from January, a Gallup poll about AI and workers, and a couple of things struck me. I'll share the link with you in the show notes. This Gallup poll showed that most people are not worried about losing their jobs to AI, which was very interesting to me, because they also said in this article that the workers most likely to lose their job to AI are clerical and administrative, lower tech savvy, older workers. And they pointed out that those people are going to have a lot of trouble finding a new similar job when AI has made us a lot more efficient at managing our own schedules, or doing everything we do in an office without clerical help. So just a thought out to those people who may see those impacts.

Carolyn Woodard

K through 12 education was at 56%. The overall rate of people who are using AI at work was at 46%. This was a couple months ago, so I imagine it's moved since then. But then you get into more nonprofit areas: community and social services, government or public policy, and healthcare were all in the 40s, along with manufacturing. And then retail was the lowest at 33%. I'm not entirely sure what that means, but I suppose that people who are working in a store are not seeing their in-person job replaced by AI or needing to use AI very much. But I can guarantee you the people at the head office of those corporations are using a lot of AI. So I just thought that was interesting. The AP article about this Gallup poll came out just about a month ago, in January, so I'll share that in the show notes.

Carolyn Woodard

I had a couple other interesting resources I want to share with you. One is an article and a framework from a friend of the company, Community IT. Build Consulting was founded by our founder, who is one of the partners. They do IT consulting, so software selection, part-time CIOs, CRM management, database management, all kinds of consulting. And they, of course, are really thinking about and interacting with clients around AI implementation. So Kyle Haines wrote this article and framework on AI implementation that I thought was just interesting. I'm going to share some of the points of this framework with you, and I'll share the link so you can check it out yourself.

Carolyn Woodard

He starts out by pointing out that there are competing realities in nonprofit work with how AI is impacting us. One is that organizations that don't explore AI may be quickly left behind. I've already seen a lot of foundations announcing grants to improve AI or use AI to realize your mission. And whenever I see many of them, I just want to kind of shake the computer and say, you know, some of these organizations that you fund have difficulty just managing their IT. I don't see how a lot of those places are going to be able to add a huge AI project on top of their IT if the IT is not well managed already. It speaks to, you know, just management capacity.

Carolyn Woodard

And a lot of times AI has been billed as something that is just easy to implement. But if you've worked with it at all, you know that there's a lot of upfront time: understanding the tool, playing around with it, putting the time aside to really use your human brain to think through what you want the AI to do, and then really working with the AI. It's really not a set-it-and-forget-it situation. So there are organizations that feel, and may be, left behind if they don't explore AI, particularly for those productivity gains.

Carolyn Woodard

But then there's also the tension with organizations that feel that if they move too quickly, or are too much on the bleeding edge of a new technology without doing planning, without doing risk assessment, they risk a lot of disappointment, staff fatigue, AI projects that don't work out. They're not adopted, the same as when your big fancy CRM only has half of your people using it. AI tools are very similar to that, and they can be really distracting because they have all that upfront time, so you have to put in the time to prioritize them. And as we've said a couple of times, if your data isn't in good shape, if your permission structure isn't in good shape, if you've been kind of hobbling along with an IT structure that's held together with some baling twine and bubblegum, AI is not gonna be a smooth implementation just slathered on over the top of it like buttercream frosting or something. You're still gonna have that problematic structure underneath.

Carolyn Woodard

So in his framework, Kyle talks about five measures that you and your leadership should be taking into account. And I really loved his first one, which is the return on investment. So, what are you trying to do with AI? Make sure that you have that really firmly in mind. Is it gonna improve your mission? Is it gonna improve constituent experience, for your donors, your constituents, the people that you work with, your community? Is it gonna improve your financial situation? All of those issues should not be a second thought. They should be right at the front when you're thinking about an AI tool. And if you map it out and you don't see a return on investment, then of course, like with any other tool, you're gonna think twice about putting in that prioritization, staff time, and financial investment if you're not sure it's gonna have any return. So I like that he puts that at the front, and I would urge you to also think about that first of all. What are you trying to do, and do you see a return on investment from investing in that tool?

Carolyn Woodard

His second measure to keep in mind as you're thinking about AI implementation is technical and data feasibility, which I just mentioned. If your data isn't ready, if it isn't clean, if it's not structured clearly, if you don't have the permissions assigned correctly, or if you don't have the tools and infrastructure already in place, you're gonna have trouble. AI is not a magic bean or just an on-off switch that you can flip and it'll work perfectly. It really is gonna work with your data. So if your data is a mess, the AI is gonna have trouble, and you're gonna spend a lot of time cleaning it up.

Carolyn Woodard

And then staff capability. I'm gonna talk a little bit more about that with another resource I'm gonna give you. But does your organization have the skills? Do you need to upskill? Do you have the ability to prioritize? What is your culture like? Are you tech savvy and tech forward? Are you excited about this? Are you scared of this? Do you have people who are really very reluctant to use AI at all? None of those things are bad or good. It's just a matter of know thyself. So understand your staff capacity and take that into account when you're thinking about an AI project. Because that staff capability, the data readiness, all of that is gonna slow down or mess up your project. And then the AI is gonna look like it's not doing anything, it's not successful, and you're gonna lose trust, you're gonna lose momentum. So really think those things through ahead of time and experiment. Maybe doing pilot projects is a good idea: take one piece of your database, see if it is set up adequately, and then learn from that and move on.

Carolyn Woodard

Measure three is mitigating the risks from AI, and using pilot projects is a good way to do that. There's lots of just technical risks. As I said, you're gonna lose enthusiasm, you're gonna lose the trust of your staff if an AI implementation goes really awry and takes a lot more time than people feel like they're getting out of it. There's other risks we've talked about previously on this podcast. There's ethical risks, so you need to know that you have guardrails in place to be sure that the AI is reflecting your values. You might need to have a person who is in charge of thinking about fairness, bias, governance, accuracy, data privacy and security, the regulations you're required to comply with, legal and policy implications for your organization around transparency, again your values, your privacy policies, your intellectual property policies, and then again operational reliability and accuracy.

Carolyn Woodard

And I've talked previously on the pod about reputational risk. So if something is public, you want to have a lot of people looking at it, if it's created with AI, before it goes on your public page or is communicated to your constituents. If something is private, and maybe contained to one team and not gonna be out there publicly, then there's less reputational risk if something is inaccurate, although it may bubble through and impact something on your website because the original numbers were just made up by AI and don't actually reflect what you're doing or what you're reporting on. So making sure that you have those governance policies in place to mitigate the risks of AI is Kyle's measure three in his framework.

Carolyn Woodard

Measure four is anticipating costs. AI is often perceived as low cost. For one thing, a lot of our tools are just upgrading, and then suddenly you have an AI companion: you have Copilot, you have Gemini, you're using Mailchimp or any number of other tools, and suddenly you have these AI options that don't, quote unquote, cost anything more. So at a lot of organizations, your eyes light up. You're like, oh, this is gonna really help me, and it's low cost. So think through all of the costs. Maybe it's low cost because you already have a license and you're just getting it, quote unquote, for free, but understand those initial costs if you do have to buy a license to get the enterprise version, which we highly, highly recommend.

Carolyn Woodard

If you need to give your staff time to prioritize it, you need to do learning to upskill your staff on it, you need to clean up that data, do the data preparation and configuration, all of those issues. You need to talk to your lawyers. Whatever it is, there are gonna be upfront costs, initial costs, and there are gonna be ongoing costs, like that time to prioritize. It's changing so quickly. You're gonna need to put time into upskilling, learning, professional development. And then there's always indirect costs as well.

Carolyn Woodard

So think all of those costs through, put them into your budget for the tool or the project that you're doing with AI, and understand that, you know, your organization may not be in a place to be doing an enterprise-wide, organization-wide AI implementation, but you have people on all of your teams who are using these tools and trying them out, and they're, quote unquote, spending their time. Sometimes they may be spending budget. So those are all costs that you want to take into account. And often maybe it makes sense to do an enterprise-wide project instead of having all of these small projects going off in lots of different directions that are costing your organization money, but might be more hidden because it's staff time or a team budget or an individual budget. So you want to make sure you're thinking about multi-year financial planning, setting realistic expectations, and really thinking through those costs, taking into account staff time as a cost.

Carolyn Woodard

And then the fifth measure that he has in his framework is the change impacts. Build is all about change management. We did a webinar with them last year on change management, and they have tons of change management resources on their website as well, if change management is something that you want to learn more about, or if you want to check in on whether you're taking everything into account. They have a lot of tools and templates that you can download there.

Carolyn Woodard

But you know, AI tools are a huge change. And again, deceptive. These tools are being marketed to us as cost-free and very easy entry. You just talk to it and it makes a little agent, it creates a draft for you. It's so helpful. It's like, do you want me to create this other thing that's related to what I just created for you? It's always asking these add-on questions and trying to keep you engaged. And that is really deceptive, because I think, as we all know, it's a huge change to how we're gonna work. And I'm gonna talk a little bit more about that in another dimension in just a moment.

Carolyn Woodard

But just keep in mind that your processes are gonna change. AI is gonna change every position of every staff member at your nonprofit. You're gonna need support to adopt these tools. You're going to need training, professional development, upskilling. It's gonna have a cultural impact. It's gonna change who you hire, what skills they have, what skills you need on your team. It's gonna change role definitions of who does what and how they do it. So all of these changes can really freak people out. They can make your culture very anxious, let alone what AI is doing to maybe the community that you're working with, the mission that you're accomplishing.

Carolyn Woodard

So take all of those things into account, and have your leadership team and your staff managers and your staff really think through the change that's coming, that's happening, that's already happened. How do they feel about it? What do they need to be successful? Try to manage that change in a responsible, intentional way so that you bring all of your staff along with you, and they are all adopting these tools that you want them to be using, and finding the pitfalls and finding these great new opportunities.

Carolyn Woodard

That's the other exciting thing about nonprofits and AI, I think: compared to the for-profit companies that I have worked at, I think nonprofits have a lot of knowledge throughout the staff. Junior staff have great ideas, leadership have a lot of management experience, and people at nonprofits understand their mission really well. So these types of tools can really give people throughout your staff an opportunity to think of a new way to do it, a more efficient way, an energizing, exciting way to use AI to make something work better, or reach new constituents, or do your work in a different way that could be even more impactful than what you're doing now.

Carolyn Woodard

So I really love this article with these five measures to do your AI framework planning and create that roadmap. I will share that link with you in the show notes. It's from buildconsulting.com.

Carolyn Woodard

And then I wanted to talk a little bit about ways that this is maybe more specific to marketing people and websites, but also, you know, leadership. If you're in leadership at your nonprofit, your website is this huge communication tool. It's how people find out about you, it's how you communicate what you're doing to the world, to your funders, to your constituents. And change is coming to search. That's the only way I can say it: Google is changing, and it's gonna change almost completely, if not this year then definitely by next year. I expect it to have changed completely this year. And you've probably already seen it. When you search for something, if you Google it, or if you use other search tools, like Microsoft's Bing, which is coming back, or DuckDuckGo or other search engines, you are seeing those AI suggestions at the top of the fold. And what that means is that an AI tool has gone out to a website, picked up the information that it thinks is reliable and reputable, and brought it back and displayed it to you, without you actually going to that website, clicking through a link on Google, clicking on a link on the menu on that website, finding the article, finding the information, judging for yourself whether you think that site looks reputable or maybe kind of shady.

Carolyn Woodard

And that's just really gonna change how we communicate and how people search for the answer to their question. So if you are on a website team or a communications team at a nonprofit, you're probably already well aware of this, but just letting everyone know: think about it, because your website is gonna need to be visible to all of those AI search tools and chatbots, and that is just changing day to day. It's changing so rapidly. So, you know, talk about that with your comms team, with the website team, and just keep that in mind. It's another impact of AI on us.

Carolyn Woodard

And the last thing I wanted to share with you today is an article from the Department of Labor with an AI literacy framework. So this is laying out the ways that job descriptions and job requirements are going to change, how to assess AI literacy and hire for it, like writing it into your job descriptions, and how to upskill the people that you have, as I mentioned, those clerical and administrative workers, older workers in those positions, making sure that they're up to speed on these tools and how the tools are gonna impact them. And this article, which I'm gonna share the link to, lays it out very clearly. I'm gonna read through a couple of things, the foundational content areas of AI literacy.

Carolyn Woodard

The first area they identify is understanding AI concepts: developing a clear grasp of what artificial intelligence is and how it works. There are lots of resources out there. You can go back to the introduction to this podcast series on nonprofit AI. But just understanding the concepts, and then understanding how those AI tools are working, lets you use them better, prompt them better, and understand why they're returning what they're returning and how to get a better outcome.

Carolyn Woodard

The second area that they talk about with AI literacy is being able to explore AI uses: understanding how AI is being used across real-world workplace settings in those different categories that I talked about. If you're a tech worker, if you work in a school, if you work in a retail store, if you work for the government, just understanding practical applications of how AI tools support tasks, can help with decision making, can streamline your workflow, all of those issues. It's very variable between different industries, and it's going to be very variable between the different nonprofit topic areas that you're working in. But that is another layer of AI literacy: understanding how the tools work, productivity tools, information support, creative generative AI, task-specific applications, agents, decision support systems, all of that.

Carolyn Woodard

The third area of AI literacy is being able to understand how to interact with AI systems in ways that produce useful and relevant results, which is what they call directing AI effectively. So you can see these different layers of literacy kind of move up the understanding hierarchy. Being able to direct AI tools to help you reach an intended audience, convey an intended tone, reach specific goals; your prompting techniques, getting better at prompting; being able to iterate on the outputs. If the output isn't correct, not giving up, going back and asking more questions and getting the AI tool to update what it did. Knowing when to give up, or when to start over, is maybe also a skill that you need to have.

Carolyn Woodard

The fourth area of AI literacy is being able to evaluate AI outputs. I think this is something that schools are working on a lot right now: assessing the quality and usefulness of the AI-generated output. And as I've said many times, making sure a human is always looking at it. A human has to be the last editor. That's something that we really, really are stressing.

Carolyn Woodard

And then the fifth area of AI literacy is using AI responsibly: not going against those terms and conditions, not using AI to create something false or misleading, not using AI on someone's image without their consent, all of those issues. Acknowledging when AI was used, protecting sensitive information and personally identifying information. And with AI, that personally identifying information bucket is getting larger and larger, because you can figure out who someone is from some data about them that isn't even what we would have considered personally identifying information. Following all of your workplace rules and policies, avoiding misuse and harm, managing context-specific risks, and maintaining accountability. So not blaming the AI if you made a mistake; understanding that workers remain responsible for the decisions and outputs they produce with AI.

Carolyn Woodard

And then they also have a second section, which I won't go into too much, but which I think is really helpful to look through: delivery principles for AI literacy. So not just for students and for people coming into the workplace, but for your existing staff. They have some suggestions here and some examples of how to implement these.

Carolyn Woodard

The first is to enable experiential learning. From the beginning, people have said, oh, just download it and play around with it, that's how you learn it. A lot of people are able to do that, but it doesn't work for a lot of people's learning styles. So you do need to provide direct hands-on use. You could, you know, provide people within the organization who act as mentors or coaches. Just recognize that people have different learning styles, and create learning environments for your staff.

Carolyn Woodard

Embedding the learning in context. You don't want people to spend a lot of time learning just generic ways to prompt; making sure that people are using your AI tools in ways that are specific to your nonprofit just makes sense.

Carolyn Woodard

Building complementary human skills: so that accountability I just talked about, making sure that you're teaching your staff that they can't just blame the AI for a mistake they made; having that critical thinking, looking at the outputs critically, and using your values, the values of your organization, human values, human rights values, as you are building skills.

Carolyn Woodard

They have a couple more here. Addressing prerequisites for AI literacy: having the foundational tools and access needed to engage with training, no matter what sector you're in. And then creating pathways for continued learning. At a lot of places, those professional development opportunities are not easily accessible or not really promoted. So it may be time for your leadership to sit down and really talk about how you're all gonna upskill, how much time you need for it, and what kinds of classes or coaching, or just time to prioritize it, you as an organization are gonna need.

Carolyn Woodard

Preparing enabling roles, because AI literacy efforts are more successful when the people supporting the workers, such as managers, trainers, mentors, or counselors, are equipped with the right knowledge on their end. So your HR department, your managers, making sure you're putting these upskilling opportunities in place, and

Carolyn Woodard

designing for agility, which, you know, it's just changing a mile a minute. We have not had tools that change this fast before. If you think about moving from secretaries taking shorthand, to using a typewriter, to using a word processor, to using a computer to take notes or handle someone's schedule: the typewriter didn't change. There was a business typewriter, and you just had to get people to the point of being able to use it, but that typewriter wasn't different the next week. Now we're in a situation where the AI tools are just changing so rapidly and gaining abilities and capacity, able to save us more time and do more things for us, especially with agents. So understand that you may put a learning project in place, you may be working on AI literacy, and you need to keep revising it, maybe weekly, monthly, every time you meet as a staff, every time you meet as management or leadership, just thinking about and talking about how you are pulling everyone along with you with these new tools.

Carolyn Woodard

So that's all I have for you today. It was a little bit of a long one; there were a bunch of resources there. I hope they're useful to you. And you can hear me again on Friday with our regularly scheduled podcast, which will be the first part of the two-part podcast on our webinar, which is tomorrow. You can join us for that. It's a free webinar on using AI tools safely at your nonprofit. Our cybersecurity expert, Matthew Eshleman, is going to talk a little bit more about how the enterprise licenses work, how to know if you're logging on correctly to that enterprise license or if you're going out to a public AI tool, and why it's so important and how to do it safely. He's gonna be taking your questions. It already has a lot of people signed up, so it's very popular. Join us! If you can't make it tomorrow afternoon at 3 p.m. Eastern, you can go ahead and sign up, ask your questions, and then you'll get an email with the link to the recording and the transcript, and the podcast is coming out on Friday. So there's lots of ways to access it if you can't come in person. But I hope you'll join us then, and you'll hear me on Friday. Take care.