Community IT Innovators Nonprofit Technology Topics

Nonprofit AI: Differences Between Public and Enterprise Tools

Community IT Innovators Season 7 Episode 10


0:00 | 17:41

To follow on from our recent discussions regarding the rapid adoption of artificial intelligence in the nonprofit sector, this episode explores the critical technical and privacy distinctions between public and enterprise AI tools. 

The CISA Incident and the AI Privacy Gap

Last week, news outlets including Politico reported that the interim director of the Cybersecurity and Infrastructure Security Agency (CISA), Madhu Gottumukkala, mistakenly uploaded sensitive government contracting documents into a public version of ChatGPT. This triggered automated security warnings designed to prevent the unintentional disclosure of government material.

This incident highlights that anyone can mistakenly upload sensitive data to a public tool — even the head of CISA.

Key Differences Between Public and Enterprise AI:

  • Data Privacy: Enterprise versions (like Microsoft Copilot for 365 or Gemini for Workspace) keep your prompts and data within your organizational "cloud boundary." Your information is not used to train the underlying public models.
  • AI Search and Permissions: With enterprise AI, the tool can surface any document a user has permission to see. This makes cleaning up your SharePoint or Google Drive permissions essential to keep sensitive files from being inadvertently surfaced via AI search. Pay particular attention to files shared with "anyone with this link," because Copilot and Gemini treat that as granting permission to anyone searching. Finally, invest in staff training on how to save and share files so that permissions need less cleanup going forward.
  • Commercial Protections: Enterprise licenses include copyright indemnification that is absent from public versions.
  • Security: Enterprise licenses give IT teams the management and administrative controls that are essential to securing your nonprofit's valuable data.
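The "anyone with this link" cleanup above can be partially scripted. As a minimal sketch (not a Community IT tool), this filters file metadata in the shape returned by the Google Drive v3 `files.list` endpoint with `fields="files(name, permissions)"`; the authenticated API call itself is omitted, and the sample data is hypothetical:

```python
def find_link_shared_files(files):
    """Return names of files carrying an 'anyone' permission (link sharing)."""
    flagged = []
    for f in files:
        for perm in f.get("permissions", []):
            # In the Drive permissions model, type 'anyone' means anyone
            # with the link (or anyone on the web, if discoverable).
            if perm.get("type") == "anyone":
                flagged.append(f["name"])
                break
    return flagged

# Hypothetical metadata, shaped like a files.list response body.
sample = [
    {"name": "budget.xlsx", "permissions": [{"type": "user", "role": "writer"}]},
    {"name": "flyer.pdf", "permissions": [{"type": "anyone", "role": "reader"}]},
]
print(find_link_shared_files(sample))  # -> ['flyer.pdf']
```

A periodic sweep like this gives you a worklist of openly shared files to review before enabling AI search broadly.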

Resources:

Trump’s acting cyber chief uploaded sensitive files into a public version of ChatGPT from Politico by John Sakellariadis, published Jan 27, 2026. https://www.politico.com/news/2026/01/27/cisa-madhu-gottumukkala-chatgpt-00749361

"The interim head of the country’s cyber defense agency uploaded sensitive contracting documents into a public version of ChatGPT last summer, ... The material included CISA contracting documents marked 'for official use only,' a government designation for information that is considered sensitive and not for public release."

Microsoft Copilot vs. ChatGPT: Data Protection Explained from Community IT.

"If you are using Copilot with a 365 subscription, your prompts and data are not used to train the underlying large language model. It keeps your data within your enterprise cloud boundary... This protection only applies when you are signed in to an eligible work or school account."

Upcoming Webinar: Verifying Your AI Security

Join Community IT CTO Matt Eshleman on February 25th to learn how to distinguish between public and enterprise accounts. Register here: How to Use AI Tools Safely at Nonprofits

_______________________________
Start a conversation :)

Thanks for listening.


Carolyn Woodard:

Hello, and welcome to the Community IT Innovators Technology Topics podcast, Nonprofit AI Midweek Check-in. I'm Carolyn Woodard, your host, and today I want to share a kind of shocking, or maybe not so shocking, story about the interim director of the Cybersecurity and Infrastructure Security Agency in the United States government, known by its acronym CISA.

Carolyn Woodard:

On this podcast and on our webinars, we are very careful to tell our listeners to be cautious using public, quote unquote, tools like ChatGPT, where you just go to a website as a consumer. You might have to make an account with them, but if you look at the terms of service, they are using your materials to train their tools, and that means they have access to everything that you upload. There are some things I would trust ChatGPT with, like looking for a recipe or finding some information on restaurants, where you don't really care about it being private. But for anything related to your job at your nonprofit, we highly, highly recommend that people use the enterprise version of whatever tool they want to use.

Carolyn Woodard:

That just means paying for the enterprise license. If you're on Microsoft 365 or Google Workspace, you probably already have a license associated with your enterprise subscription, so if you don't know how to access it, you can talk to people at your nonprofit.

Carolyn Woodard:

We're going to be holding a webinar on February 25th, where our CTO and cybersecurity expert, Matt Eshleman, will go through in some detail how you know whether you're signing in to your company account, and how to make sure that you're doing that. It's something we talk a lot about, because you may have some confusion about it.

Carolyn Woodard:

It can be confusing, but one would not expect the acting cyber chief to upload sensitive files to a public version of ChatGPT. Yet it came out last week in Politico, and some other news outlets have covered it, that that's in fact exactly what happened, over last summer. The interim head of the cyber defense agency uploaded sensitive contracting documents into a public version of ChatGPT, triggering multiple automated security warnings that are meant to stop the theft or unintentional disclosure of government material. This was Madhu Gottumukkala, the acting director of the Cybersecurity and Infrastructure Security Agency, and he had actually requested special permission to use ChatGPT, so one is just surprised that there was any confusion about what he was or wasn't allowed to upload to the public ChatGPT. In any case, I will share that link with you in the show notes. This should be a reminder that it can happen. It shouldn't be happening at that level, but it's definitely something to be concerned about.

Carolyn Woodard:

So, just to go over it again: if you search for the difference between Microsoft Copilot and OpenAI's ChatGPT, they're essentially the same tool, but the differences center on your data privacy and the business integration. You get what you pay for. Generally, Copilot with a 365 subscription is designed for enterprise-level data security, whereas ChatGPT is free; the way they're making, or are eventually going to make, their money is by using your data and your information. That's why it's quote unquote free.

Carolyn Woodard:

So if you're using Copilot, you have data protection. You can look into all of this for yourself in the terms and conditions agreement that you have, or you can ask at your nonprofit about it. For data protection, your prompts and data are not used to train the underlying large language model for Copilot, and it keeps your data within your enterprise cloud boundary. Commercial data protection does have limits, though: this protection of your data only applies when you're signed in to an eligible work or school account. And it may have some access limitations, which is good and bad.

Carolyn Woodard:

We've talked about this a little bit. We've been talking a lot about trying to organize and clean up your data now that AI search is available. In the past, documents that were shared, or that sat in a shared folder, might have had what's called privacy by obscurity: unless you went looking for those documents, you wouldn't come across them, and who really has the time to poke around in every folder you might have access to? That difficulty was a kind of layer of protection for some of the things shared in those folders. With AI search, you can find any document that you have permission to see. And that permission may not have been granted to you specifically; it may be that the person who owns the document set it so that anyone with the link can find it, in which case Copilot is going to see that and provide you with access. If the terms you're searching for are in that document, it's going to surface it and show it to you, because you have access to it. There are ways to go through your files and look for files that are set with that kind of open access.
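That kind of open-access scan can also be approached programmatically on the Microsoft side. Here is a minimal sketch (an illustration, not an endorsed tool), assuming permission objects in the shape returned by Microsoft Graph when listing a driveItem's permissions; the authenticated Graph request itself is omitted, and the sample data is hypothetical:

```python
def has_anonymous_link(permissions):
    """True if any permission is a sharing link whose scope is 'anonymous',
    i.e. an "Anyone with the link" share in SharePoint/OneDrive."""
    return any(
        p.get("link", {}).get("scope") == "anonymous" for p in permissions
    )

# Hypothetical permission objects, shaped like a Graph permissions response.
perms = [
    {"roles": ["read"], "link": {"scope": "anonymous", "type": "view"}},
    {"roles": ["write"], "grantedToV2": {"user": {"displayName": "HR"}}},
]
print(has_anonymous_link(perms))  # -> True
```

Running a check like this across a document library flags the files whose sharing links AI search would treat as open to everyone.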

Carolyn Woodard:

But the point for this conversation is that Copilot can only access and use organizational data that the user already has permission to see. That's good in the sense that, if you have your HR folders set up correctly in SharePoint, a random user at your organization should not be able to find them. So that should protect sensitive documents that are not shared, or that are in folders that are not shared.

Carolyn Woodard:

But again, that cleanup is something you're going to have to undertake as a way to clean up your data, protect your data, and ensure that you have clean and protected data going forward. It's something you're going to need to do staff training on, and then keep at it; don't assume that if you've done staff training once on how to save or share a document, it will stick. You need to revisit, probably quarterly or more frequently, whether AI search is turning up anything with quote unquote open permissions, and make sure that you're staying on top of that.

Carolyn Woodard:

Other Microsoft Copilot terms for the enterprise version: deepfakes and harassment are prohibited. The terms of use prohibit creating content to impersonate people, engage in doxxing, or create adult content or violence. Of course, it's a work account, so hopefully one is not doing those types of activities while at work. I would say this is one of those terms and conditions where you could do it, but when you were caught, you would be in violation of both the Microsoft Copilot terms and probably your own acceptable use policies at your organization. So that is a level of protection that's there.

Carolyn Woodard:

And then the copyright commitment. Microsoft has expanded its Customer Copyright Commitment to cover commercial customers, promising to defend them against copyright infringement claims for using Copilot outputs, provided proper guardrails were used. So again, you're going to want to check those terms and also check the acceptable use policies at your organization.

Carolyn Woodard:

For OpenAI's ChatGPT, it's pretty much the opposite on everything. By default, ChatGPT may use your conversations, your inputs, and anything you upload to it to train their models. You can opt out of that data use, but it will limit your functionality. As for ownership of the output: you own the output, but you must not represent it as human-generated if it was not. I think it's kind of squishy that you have to acknowledge it was AI; you can't create an artwork and say, I did this without AI. That would go against their terms of service.

Carolyn Woodard:

The terms state that you must not rely on ChatGPT for professional, legal, or medical advice. So again, one of the squishy things: you can find that advice out there, but if you use it, don't come after ChatGPT, because they've already said not to use it as acceptable medical or legal advice. Similar to Copilot, ChatGPT prohibits using the service to generate disinformation, deceive, or harm others. And I'm going to say right now a big ha, that they will come after you for that, because as we know, it is being used for those purposes as we speak. All right, so that is the difference between Copilot and ChatGPT.

Carolyn Woodard:

For Google, it's a little bit more confusing because they call it Gemini in both instances, but the differences are very, very similar. The main differences are between the terms of service for the public Gemini, where you just go sign in and ask it questions or upload things for help, and Gemini for Google Workspace, for enterprise and business. The big differences again are data privacy, model training, and intellectual property protection. Google says public Gemini is for personal use, and enterprise Gemini is for business use with enterprise-grade protections.

Carolyn Woodard:

Again, the big differences in the terms of service include data privacy and model training. If you're using the public Gemini, Google may use your conversations, prompts, and feedback to train their AI models and improve services. You can turn that off, as you can with ChatGPT, though some data is kept for 72 hours, and it will limit your functionality; they're relying on you to upload prompts, questions, and documents so it can learn to respond to you even better. For enterprise Gemini, when you're using Workspace, your data is not used to train or improve Gemini models, it is not reviewed by humans, and prompts and generated content are not shared with other customers. There is one exception that some nonprofits may be interested in; I did not realize this, but there is a starter edition of Workspace with a 30-day trial, and if you're using Gemini during those 30 days, it says that it may use your data for training. Once you're on the fully paid enterprise subscription license, your data is kept within your organization. The way to explain it is that Gemini is still using its model to respond to you, but the things you upload to it are not going out and becoming part of that model.

Carolyn Woodard:

There is also a difference in intellectual property and content ownership. When you're using public Gemini, the ownership of that generated content is very complex; I'm not even going to begin to go into it. With enterprise Gemini, for copyright and such, you own your data, not Google. Generated output from enterprise Gemini is considered to be customer data, meaning your AI-generated assets belong to your organization.

Carolyn Woodard:

Then data residency and compliance. With public Gemini, they just have it; you don't control where it's stored. Enterprise Gemini supports compliance with HIPAA, with GDPR for Europe, and with FedRAMP, which is what you need to be compliant with to work with the federal government in the United States. It supports data residency policies and lets administrators control where data is stored. And again, there are those permissions.

Carolyn Woodard:

Then data scope. Public Gemini is limited to public web data and input from the current session, so it can only use things that are publicly available. Enterprise Gemini operates within your secure private ecosystem: it can use your organization's data, internal documents, Gmail, anything on Drive. It can also connect to third-party data sources, like your Salesforce. So that's something to consider.

Carolyn Woodard:

And then security controls. If you're using public Gemini, of course, there's no administrator; you're just out there in the public, so you have no control. Enterprise Gemini does have administrative controls, so IT can manage access, which allows it to support data loss prevention and certain types of encryption. So those are the quick updates if you were confused about the differences between public and enterprise AI tools from the two big players, Microsoft and Google. I just wanted to quickly go over it.

Carolyn Woodard:

If you're interested in other tools, like Anthropic's Claude, or Perplexity, or Change AI, just be aware that if you're creating a free account, you're essentially using the public version. If you purchase a subscription, either for yourself through your workplace or for your workplace as an enterprise version, then you have those additional protections, and your data is private to your organization. It's still super, super functional; I want to emphasize that. It's just that, especially if you are concerned, if you have sensitive information, if you want to protect your privacy and the privacy of your data, which of course everyone does, we strongly recommend using the enterprise version of the AI tool that you're interested in.

Carolyn Woodard:

Of course, if you are using a tool that has added AI features, for example Zoom, Zapier, or maybe your CRM like Salesforce, if they have AI and you do the update and now you have AI, that is also, for the most part, going to be controlled within your enterprise license for that tool, for Zoom or Zapier or Salesforce. But it's probably good practice to keep looking at the terms and conditions and just make sure that you're using AI safely at work. All right.

Carolyn Woodard:

Well, that's it for this episode, the Midweek Check-in on Nonprofit AI. I'll see you again on Friday with our regularly scheduled podcast, and every Tuesday we'll be coming back with some quick AI tips for nonprofits. Take care.