Community IT Innovators Nonprofit Technology Topics
Community IT offers free webinars monthly to promote learning within our nonprofit technology community. Our podcast is appropriate for varied levels of technology expertise. Community IT is vendor-agnostic and our webinars cover a range of topics and discussions. Something on your mind you don’t see covered here? Contact us to suggest a topic! http://www.communityit.com
Nonprofit IT Roundtable pt 1 with Senior Staff
Panel Discussion with Matt Eshleman, Steve Longenecker, Jennifer Huftalen, and Carolyn Woodard
Our experts answered your questions about where nonprofit tech is going next.
In part 1, Community IT senior staff discuss nonprofits and AI, and updated cybersecurity trends to be aware of. In part 2, they discuss updates to Microsoft and Google Workspace, and take audience Q&A.
AI, Cybersecurity, Google Workspace v Microsoft Office, Gemini v Copilot or ChatGPT or another generative AI tool, AI agents, AI FOMO, data data data, safety and security of your staff, budgeting for and maintaining basic IT, not to mention fancy IT … anything else you want to know about?
We don’t have a crystal ball but we do know our way around nonprofit IT.
We’ll look back at the trends of 2025 and what we got right last January, and we’ll look ahead to make predictions for 2026.
The nonprofit tech roundtable is always one of our most popular webinars every year. As with all our webinars, this presentation is appropriate for an audience of varied IT experience. Community IT is proudly vendor-agnostic, and our webinars cover a range of topics and discussions. Webinars are never a sales pitch, always a way to share our knowledge with our community.
_______________________________
Start a conversation :)
- Register to attend a webinar in real time, and find all past transcripts at https://communityit.com/webinars/
- email Carolyn at cwoodard@communityit.com
- on LinkedIn
Thanks for listening.
Thank you for joining Community IT for this podcast, part one. Subscribe wherever you listen to podcasts and leave us a rating to help others find this leadership resource for nonprofits. Listen for part two in your podcast feed.
Carolyn Woodard: Thank you everyone for joining us. We'll keep letting people come into the room, but I'm going to go ahead and get started because we have so much to cover today, and I want to say in advance that we are not going to be able to cover everything. Welcome everyone to the Community IT Innovators webinar, the Nonprofit Tech Roundtable. It's one of our most popular webinars every year. We have a panel of our senior staff here today to talk about essential trends in nonprofit tech and what that means for nonprofits.
Carolyn Woodard: My name is Carolyn Woodard. I'm the outreach director for Community IT, and I'm the moderator today. I'm very happy to hear from our experts, but first I want to go over our learning objectives. We're going to discuss several themes today with our experts. We know nonprofits are always facing challenges, and some times feel more challenging and stressful than others. There are a lot of other venues for talking about politics at the moment, funding issues, and policy changes. Today we want to acknowledge all of those challenges that are going on, and, taking that into account, we're going to move right into talking about IT and whether we can help you weather some of these storms with well-managed IT.
Carolyn Woodard: So today our learning objectives are to talk about what you need to know about AI as these tools become more integrated into our workplaces, how they are impacting nonprofits, and what the best practices are around using AI at nonprofits. We're going to learn about new cybersecurity attacks and prevention, and about Google Workspace and Office 365 updates and what to look for in 2026. I'm happy to welcome Jennifer Huftalen to her first January roundtable. I'm looking forward to hearing her weigh in with her view from the hundreds of current and prospective clients she works with on what they're worried about in 2026 and what trends she's seeing. So now I'd like to let my colleagues around our round table introduce themselves. Matt, would you like to go first?
Matthew Eshleman: Great, thank you. It's good to be here. I'm not sure how many of these I have done, but I always enjoy the conversation about what to expect here in 2026. This is actually my anniversary month at Community IT; I joined full-time back in 2002, so about 24 years full-time here at Community IT. I've seen lots of technology change, evolve, and grow over my career here, and I'm looking forward to the conversation today. I'll turn it over to Steve to introduce himself.
Steve Longenecker: Sure. I'm Steve Longenecker, the director of IT Consulting at Community IT. I've also been part of this particular webinar for many years now, and it is a fun one. I'll take Matt's cue and say that I've been at Community IT for a long time as well, and take the chance to brag about my son. He is graduating from Emory University this spring, and I remember delaying my start date at Community IT because his birth was imminent and I wanted to have a few weeks at home when he was a newborn. So I started about a month after he was born, and now he's graduating from college. Yay! Unbelievable.
Carolyn Woodard: That's amazing. Congratulations, Steve, and congratulations to your son.
Steve Longenecker: Thank you. Jenny, do you want to go next?
Jennifer Huftalen: Yeah, sure thing. Thanks, Steve. I'm Jennifer, or Jenny, Huftalen, the director of client services at Community IT, and I've been with the organization for 18 years. So, kind of a first time, long time here. I've listened to a lot of these webinars, and I've enjoyed the benefit of working with my esteemed colleagues here over the years. I've had a lot of conversations with hundreds of organizations, both current clients and prospective clients, and I'm really excited to be a part of this discussion today.
Carolyn Woodard: Before we begin, if you're not familiar with Community IT, I'm going to tell you a little bit about us. We are a 100% employee-owned managed services provider, and we provide outsourced IT support exclusively to nonprofit organizations. Our mission is to help nonprofits accomplish their missions through the effective use of technology. We are big fans of what well-managed IT can do for nonprofits, and we serve nonprofits across the United States. This year is our 25th anniversary as a company. We are technology experts, and we're consistently given the MSP 501 recognition for being a top MSP, an honor we received again in 2025. We believe we're the only MSP on the list serving nonprofits exclusively.
Carolyn Woodard: I want to remind everyone that for these presentations, Community IT is vendor agnostic. We make recommendations to our clients based only on their specific business needs; we never try to get a client into a product because we get some kind of incentive or benefit from that. But we do consider ourselves a best-of-breed provider. It's our job to know the landscape and which tools are available, reputable, and widely used, and we make recommendations on that basis for our clients, grounded in their business needs, their priorities, and their budget.
Carolyn Woodard: So I'm going to start with a poll, as we like to do. This poll is: do you have AI policies at your nonprofit? Option one is "I don't think so? (not sure)." Number two is "We're in the process of creating policies." Number three is "Yes, our organization has created an AI acceptable use policy, and our staff understand our policy." And the fourth answer is "Not applicable." I'm going to put a link in the chat to our AI Acceptable Use Policy template, which you can download from our site. You can also just Google it, and you can use AI to help generate a policy. Again, the question is: do you have AI policies at your nonprofit? It looks like we have a great response, so I'm going to go ahead and end the poll and share the results with you. Steve, can you see that?
Steve Longenecker: I can see it, yeah.
Carolyn Woodard: So would you tell us about the results?
Steve Longenecker: Sure. We had 48 responses. 19% said "I don't think so?" Then about half, 22 out of 48, said they are in the process of creating policies. 21% said that they have an AI acceptable use policy and that their staff understand it. And seven people out of the 48 said it didn't apply to them.
Carolyn Woodard: Great. And I would say that pretty much tracks with what we're seeing. Jenny, do you want to weigh in on that? I think there are many nonprofits that are aware that they need to have AI policies, but it's a journey getting there.
Jennifer Huftalen: Yeah, I think that's right. In the majority of conversations that I've been having, organizations are just aware that AI exists and that they should look into it. They know some people are using some tools, and they may want to find the time to make that an initiative, but they're not sure where to start. So the majority of the folks I'm talking to are stressed about it in some ways, because they know they should have some sense of what's going on with it and, again, aren't always sure where to start.
Jennifer Huftalen:Um but uh for the most part, I can't say that I've had many conversations with organizations that um have it figured out. So uh that's um that that may be some uh peace of mind for some of you who are in that mode of of just I don't think so, or we're we're kind of in the process. Um, you know, I think uh nobody has really sort of um uh has it laid out perfectly yet. Um, but now's the time, I think, uh in general to start thinking about that.
Carolyn Woodard: I'll jump in and just say that even if you have a one-pager that seems pretty vague and maybe not really a policy, that's better than not having talked about it at all. So don't let the perfect be the enemy of the good here. If you're having those conversations, if you're elevating it up to your board level, and you're putting together what are maybe principles instead of a policy, that itself is very helpful too.
Carolyn Woodard:So we want to move on to talk about AI and nonprofits. Um, so these are some talking points that we are coming up consistently at community IT with the clients that we're working with, with the conferences that we're going to, with everything that we're reading about AI. Um, so I wanted to go over them a little bit quickly and then start out a discussion with our experts here.
Carolyn Woodard:Um, so the number one thing we're saying and hearing and hoping that you are taking away is to match the tools to your needs instead of finding a tool that sounds really good and matching your needs to that. So start with your needs. What do you need to have automated? What could is a repetitive task that could be done better by a non-human, um, and then find a tool that fits the need that you have.
Carolyn Woodard:Um, beware of public tools. Um, that is something that we're hearing over and over, but um, you know, trying to use enterprise enterprise AI tools instead of public ones.
Carolyn Woodard:Um, having a human as the last editor, like any assistant, you wouldn't just like take their work and put it out there on your website publicly without having ever checked it over. So make sure in your policies that a human is the last editor on what you're what you do with AI.
Carolyn Woodard:Have a policy, even as I said, a one-page philosophy is better than nothing. Use it as a starting point for conversations. This is an iterative process. You cannot have a policy that you made six months ago still be adequate with the pace that uh AI is changing. So it should be something that you are having conversations about.
Carolyn Woodard:Um AI is adding tools to the tools you already have, so you need to be aware of that.
Carolyn Woodard:And then taking training seriously and committing to upscale yourself and your staff around AI.
Carolyn Woodard:Um, we think AI, nonprofits that are getting more familiar and comfortable with AI are going to be more effective, more productive, and um, there will probably be funders that are interested in that progress too.
Carolyn Woodard:Um, so what does this mean for you? What does it mean for your nonprofit? If you want to put in the chat, uh, if you have thoughts on this, what what are you doing with AI? What is your nonprofit doing with AI? What does it mean to you? Um, thank you, everyone who's been putting in the chat about creating a policy using our template or other templates. Um, but I want to turn it over to our experts.
Carolyn Woodard:So um, Steve, Matt, and Jenny, um, I don't know who wants to go first, but do you have thoughts on AI and nonprofits? Um, maybe Matt, we'll start with you with the cybersecurity angle.
Matthew Eshleman: Yeah, from the cybersecurity perspective, organizations obviously have a primary concern about the integrity and confidentiality of the data they hold about the constituents they serve, about their staff, and about their board members. So even if you don't necessarily have a formal AI adoption strategy, or don't know where to start, the good news is that you can start by identifying and securing the data that you already have.
Matthew Eshleman:Um many of these AI tools, uh, their policies and permissions follow the users that they are kind of assigned to or the licenses are assigned to. Um and so even if you're kind of not quite sure maybe how a user would would kind of use some of these new new tools, uh you can kind of go through and identify in your organization, you know, here's where our sensitive data is. Do we have the right permission structure in place? So that whenever you know a license is assigned to um to a user, and then that AI tool you know begins to discover all the information it has access to, right? They they the this the AI tool only has access to what it what it's supposed to or what it needs to.
Matthew Eshleman:Um so again, I think you know you can kind of start there, even if you're not sure which tool to use, um, but you know, start by understanding your organization's um data better, who has access, where it lives, um, and and you can kind of get uh get that work done regardless of the AI tool that you end up um using or adopting.
Carolyn Woodard: Yeah, I feel like for my entire career in nonprofits, people have been trying to find an easy, quick way to organize their data and data files and share them easily and efficiently. And AI is just not there yet. You really do need to know what the permissions are. Maybe AI tools can help you with organizing, but you know your organization and your files better than anyone does, so you need to be in charge of it. Sorry, everyone.
Matthew Eshleman: Well, the other thing I would highlight, and you touched on it as one of your first points in terms of matching tools to needs: AI is a tool, a very powerful tool that can help you do a lot. So it's important for organizations to understand what that thing is that we're asking AI to do. There are lots of stories of, say, somebody vibe coding a whole new CRM system, because you can. And just because you can doesn't mean you should.
Matthew Eshleman:And so uh, you know, all this stuff is really great and it's powerful, and there's lots of great examples, but uh, you know, kind of like the boring stuff of you know, organization governance and management and kind of reporting and and interoperability, you know, still at the end of the day, um you know needs to take precedence. And so I think being clear about well, what is the problem that we are trying to solve or what is the dream that we hope to fulfill, uh, and then figuring out, okay, well, what do we need to do? Maybe we do need an AI tool, or maybe we need to like have some planning meetings to like restructure how we think about and organize files in our SharePoint environment, right?
Matthew Eshleman:So um, so AI is amazing, and I think the capabilities are are changing so fast. Uh, you know, what you decide today is gonna be totally different in three months, but you know, as an organization, you know, kind of figure out what is the problem we're trying to solve, what are our pain points, uh, is it a technology problem, or are there maybe some other things that we can do um to fix that is is a good place to start.
Carolyn Woodard: Jenny, do you have any tales from the trenches?
Jennifer Huftalen: Yeah, a few where I think people have had success. Much of it echoes what Matt has said: really understanding what problem you're trying to solve. The AI technology may change, but the problem you're trying to solve and your organization's mission don't change. And being aware of where your weak points are, the ones that AI might solve, is really an organizational structure question that requires all of the basics that, again, AI can't fix for you.
Jennifer Huftalen:So you know, having good governance, um, you know, sort of understanding uh who makes the final decisions on these things, um, and again, understanding kind of what problems you're trying to address and and and having um you know clarity on which data uh sets you're you're inputting, um, I think all make uh uh a big, big difference in how effective the tools are.
Jennifer Huftalen:Um I think I've also heard from folks who um have had success when they uh really engage with kind of key stakeholders and also um other staff members that uh you know are are maybe nervous about this change. Um you know, there it is because it's so it's so new, and there is a lot of discussion about how this is just gonna replace everybody. And um, you know, it's it can be um staff can be very fearful of of this new um this new venture that we're all on. And so I think just having um you know investment in in working directly with your staff, keeping them um informed about again those big picture items, the the the problems that you're working to address, um, and asking them for feedback. You know, what what problems do you face in your day that if a tool could fix, you know, um you you'd love to have that tool. So again, you get a sense of like where people's pain points are and you can um kind of uh help them enhance, you know, um you know, use the tools to enhance the work that they're doing, not kind of again uh replace the work that they're doing.
Jennifer Huftalen:And um, yeah, again, I think that all of that means the things that we mentioned here, which is um, you know, you need to have a human involved, don't lose sight of that piece. Um, make sure you're careful about what uh information you're giving these systems, um, and that there's accountability for um, you know, the the you know, the what how people are using it and what goals you're trying to achieve uh through this through the tool.
Steve Longenecker: I'll just riff, if you don't mind.
Carolyn Woodard: Go for it.
Steve Longenecker: I want to piggyback on what Jenny said about using your own staff as resources on this. It's so new, and it's changing so fast, that it's not just new: what's out this month is newer than what was out in October. It's that kind of new. So it's not like you can just sign up for an AI training program.
Steve Longenecker:I really think that the way for this to be adopted efficiently is to, you know, have discussion groups and and and need, you know, a uh pilot user groups and people sharing with each other what they do and how they're uh making AI uh both effective and and safe and fitting within your your organization's um governance and and requirements.
Steve Longenecker:And the other thing I wanted to react to um in the chat, I um saw a chat just a f just a little few minutes ago from Sarah saying that the uh that their attorneys at their organization uh use AI quite a bit. I live in Washington, DC. I go out for beers with friends, they're all attorneys. Like that's just like everybody, you know, you can't uh swing a dead cat without hitting an attorney. Um is that the expression? Anyway, there's lots of attorneys. Sorry, cat lovers. Um lots of attorneys around, and they all swear that oh my gosh, it's like it it you know, it really is gonna cause problems for young attorneys. Who are trying who get trained in the past by being given these assignments where they're asked to like summarize all the case law pertinent to a particular case. And then now a lead attorney can just ask AI to do that, and it's like done better, apparently. I'm trusting my friends on this. But that's kind of amazing.
Steve Longenecker:But the other what I wanted to flag in Sarah's note was she said we currently don't allow AI other than for the attorneys. And that is probably that could be a reasonable decision for the organization.
Steve Longenecker:My flag on that is that I think AI is probably like the worst of the worst when it comes to shadow IT. I mean, people, you cannot keep this in a box. And so let's just like if your HR person is trying to like find an intern for the summer or five interns for the summer, and they they just got a hundred resumes, like it is gonna be really tempting to just upload a hundred resumes into the public AI tool and say, tell me which are the 10 best that I actually need to read. That's gonna be an amazing time saver for them. And it might actually be accurate. I mean, there's all these questions about AI and it, you know, whether or not it discriminates, you know, based on, you know, cues that are that maybe a more socially conscious human being would be aware of and the AI is not aware of. So that's one concern I have about that.
Steve Longenecker:But my bigger concern about that is they just used a public tool and uploaded a hundred people's resumes into it. And what just happened to that, those people's data. So I strongly urge the organizations that are trying to curb AI to really bang that drum hard and also consider providing paid for AI tools that you do trust sooner rather than later, even though it's an investment and it's a cost and you don't know how well it's gonna get used. But if people are, you you know, if people are using AI, sorry, I'm running on about this, but I do think it's that it is not, it is going to happen. People are going to be using AI, and stopping them is gonna be very difficult.
Carolyn Woodard: Yeah. I've told this story before, but I heard someone say, oh, our AI policy is just that no one's allowed to use it. Well, A, we could probably tell you, if you looked at the logs, that a lot of people are going out to ChatGPT, and you're just not able to keep that genie in the bottle, as you said. And B, every kid coming out of college is going to be using AI for a lot of their work, so you're really cutting yourself off with such a draconian policy.
Carolyn Woodard:But I think, as you have all said, like having this ongoing conversation as it evolves, as it changes, uh, is really important to have that at the highest levels and throughout the staff.
Steve Longenecker: Dorothy, I would say Microsoft Copilot and Gemini are both good paid-for tools.
Steve Longenecker:And Miriam, uh, yes, I would say any free tool, if it's free, you really have to question what the business model is because if you're not, if they're not making money off of your subscriptions, they're making money off of you somehow, um, generally speaking. And so I would say any free tool would be um you'd you want to be careful about that for sure.
Steve Longenecker:Matt, do you have a do you want to add to that? Um you're the expert in terms of that.
Carolyn Woodard: Well, I'm actually going to turn it over to Matt for our next segment on cybersecurity. So Matt, if you want to weigh in, please go ahead.
Matthew Eshleman: Yeah, and I think that bridges the connection. The good news is that for anyone who has a Microsoft 365 subscription, if you go to copilot.microsoft.com and sign in with your Entra ID credentials, even if you've not paid for Copilot licensing, you immediately get a free, protected version of that ChatGPT model. The data you put into it, your prompts and the data that comes back, is protected by Microsoft's data governance and does not go back in to feed their model. So that's a good entree into using some of these AI tools: hey, let's use this, not that.
Matthew Eshleman:Uh again, I you know would echo what Steve said and Carolyn said, right? It is a hard prohibition, it is really hard to um to really enforce unless you've taken a whole bunch of other steps, um, which we are starting to see some organizations do to say, we are here are five authorized AI tools, and everything else we're gonna block. And there are technical ways to do that.
Matthew Eshleman:So um again, uh you know, so you can use some of the commercial tools for free, right? If you already have subscriptions and kind of leverage those existing investments before you, you know, kind of maybe evaluate and figure out what other tools might uh help you um solve your business, your business case.
Matthew Eshleman:Um, you know, on the cybersecurity side of things, um I think what we are seeing is that the the bad guys are using these tools um for improving their work and writing more effective emails that are more engaging that will get you to click on things, just as maybe the development department is. Uh and so we are seeing, you know, kind of a you know steady increase in sophistication for um uh those targeted, uh targeted emails, right? A lot of the things that we would we would have kind of relied on in the past to say, oh, if the email is misspelled, if it doesn't use proper grammar, uh, you know, those visual, those um, you know, kind of narrative clues have basically been you know dissolved because now you know the hackers can just go to the free ChatGPT and drop in their prompt and and have a well-crafted email.
Matthew Eshleman:Um so again, uh we see that uh you know kind of on the on the phishing side, you know, uh so you know the emails are more sophisticated, the the the copy that they're writing is is better, uh, and and so that is one way that we are seeing AI being used um uh to to kind of aid in the attacker's speed.
Matthew Eshleman:Um I think some of the industry data that is being reported is that um, you know, fortunately the hackers are not necessarily using AI to like, in general, like have kind of new attack methods. They're just kind of doing what they have been doing, but doing it better and faster. Um so you know, I think that you know is kind of the current state. Again, that may change as these models become you know more sophisticated, right?
Matthew Eshleman:And so the conversation that we're having now about the state of AI uh is gonna be very different, even three months from now as the tools continue to evolve and gain uh autonomy and capability.
Matthew Eshleman:Um on the specifics around cybersecurity, uh, you know, so obviously we're wrapping up 2025. Uh, and you know, I just took a took a first peek at the security incident data that we saw uh amongst uh from our 200 uh clients, about 8,000 nonprofit staff that we support. Um, you know, and again, a couple takeaways, right?
Matthew Eshleman:Spam and phishing uh continues to you know increase year over year. Uh, you know, it's kind of easy, cheap, and effective. And and you know, that that is only going to continue.
Matthew Eshleman:Um there were some bright spots, uh, one being that the number of compromised accounts uh that we uh responded to did drop compared to the previous year. Um, you know, I think probably through some combination of better training, uh we continue to have more and more staff uh adopt our uh formal cybersecurity training platform. And so I think just educating staff is a great way. Uh you know, technology tools aren't going to solve our cyber problems, and so having engaged staff is important.
Matthew Eshleman:Um, you know, I think some of the things that are out of our control, uh, you know, Microsoft, uh, their digital crimes unit, in addition to uh, you know, the NGO ISAC uh you know did some legal action that resulted in takedown uh of a lot of uh threat actors kind of IT infrastructure that was involved in kind of these attacker-in-the-middle frameworks that would steal uh identities. And so uh we did see a drop um in uh kind of compromised accounts.
Matthew Eshleman:On the flip side, we actually saw an increase in the amount of virus activity uh that was detected by our endpoint security tools. So again, that's great to block. Um, but again, uh so maybe that is an example of the threat actors using some of these um AI tools to again write scripts, malicious software, uh kind of better, faster, cheaper.
Matthew Eshleman:And uh, you know, so we did see more endpoint uh activity, which is which kind of uh is in opposition to a trend that we've observed for many years, which is uh you know, just a reduction in in endpoint uh security risk over time.
Matthew Eshleman:Um we are seeing uh you know, just a note about kind of compliance requirements. Uh uh, you know, we are seeing that come from funders uh as organizations try to adopt more formal cybersecurity frameworks to kind of make sure that that organizations are dotting all the I's and crossing the T's. And so, you know, I think that that is a good thing. It may feel like that's a lot of overhead, but um I think it isn't it helps to push organizations along to adopt these tools and techniques.
Matthew Eshleman:Um, and then I would say, you know, we are updating our uh guidance around multi-factor authentication. This has been a common refrain that we've said, you know, you need MFA, you need MFA, you need MFA, which is true. Um now the updated guidance is that you need phish resistant MFA uh in response to some of these uh techniques that threat actors can use.
Matthew Eshleman:So it basically means that enabling things like Windows Hello on your Windows computer or platform SSO if you're on a Mac, uh using Fido keys, right? I have a little physical security key, right? These are fantastic. Um, pass keys and Microsoft Authenticator, right? These are you know an evolution or more secure uh way to do that multi-factor authentication that ties your user session specifically to the device that you're on, right? So that that session can't be stolen and kind of taken to a different device.
Matthew Eshleman:So um, you know, and then the final uh uh piece here is uh again kind of coming back to what is the most likely and the most common attack scenario that we see. And it's really you know, it's still as a phishing email that's targeting a user to get them to click or interact uh with a document or an attachment or something so that they can steal the user's identity and steal money, right? That is still the most common scenario that we see.
Matthew Eshleman:Um, and so you know, that's you know, kind of goes to support our guidance and recommendation to you know invest in security tools to train and educate your staff, improve your MFA methods, right, to help reduce that risk, um, and invest in technology that can kind of identify, block, alert uh when there are those suspicious logins uh that occur so that we can respond quickly uh and prevent damage uh from occurring after a user's account is compromised.
Carolyn Woodard: I'll jump in and say we have a ton of cybersecurity resources on our website: free past webinars, blogs, templates, downloads, and more. I also want to say that on February 25th, Matt will be back for a webinar on using AI securely, that junction of AI and cybersecurity. And in April, we will have our annual report on the incidents we see across the thousands of users we support. So please come back and join us for those. Thank you, Matt, for updating us on what to worry about.
Community IT Intro: Thank you for joining Community IT for this podcast, part one. Subscribe wherever you listen to podcasts and leave us a rating to help others find this leadership resource for nonprofits. Listen for part two in your podcast feed.