Community IT Innovators Nonprofit Technology Topics

Managing AI Risks at Nonprofits with Peter Campbell

Community IT Innovators Season 6 Episode 33

Peter Campbell is the principal consultant at Techcafeteria, a micro-consulting firm dedicated to helping nonprofits make more affordable and effective use of technology to support their missions. He recently published a free downloadable PowerPoint on Managing AI Risk, and he sat down with Carolyn to share his thoughts on developing AI policies with an eye to risk, where the greatest risks lie for nonprofits using AI, and how often to review your policies as the technology changes rapidly.


The takeaways: 

  • AI tools are like GPS (which is itself an AI). You are the expert; they cannot critically analyze their own output, even though they can mimic authority.
  • Using AI tools in areas where you have subject expertise allows you to catch and correct errors in the output. Using AI tools in areas where you have no knowledge adds risk.
  • Common AI tasks at nonprofits range from low-risk activities, such as searching your own inbox for an important email, to higher-risk activities more prone to consequential errors, such as automation and analysis.
  • Common AI risks include inaccuracy, lack of authenticity, reputational damage, and copyright and privacy violations.
  • AI also has risk factors associated with audience: your personal use carries a fairly low risk that you will fool yourself or divulge sensitive information to yourself, but when you use AI to communicate with the public, the risk to your nonprofit increases.


How to Manage AI Risks at Nonprofits? 

  • Start with an AI Policy. Review it often, as the technology and tools are changing rapidly.
  • Use your own judgement. A good rule of thumb is to use AI tools to create things that you are already knowledgeable about, so that you can easily assess the accuracy of the AI output. 
  • Transparency matters. Let people know AI was used and how it was used. Use an “Assisted by AI” disclaimer when appropriate.
  • Require human third-party review before sharing AI-created materials with the public. State this requirement in your transparency policy and disclaimers. Be honest about the roles of AI and humans in your nonprofit’s work.
  • Curate data sources, and always know what your AI is using to create materials or analysis. Guard against bias and harm to communities you care about.

“I’ve been helping clients develop Artificial Intelligence (AI) policies lately. AI has lots of innovative uses, and every last one of them has some risk associated with it, so I regularly urge my clients to get the policies and training in place before they let staff loose with the tools. Here is a generic version of a PowerPoint explaining AI risks and policies for nonprofits.”

Peter Campbell, Techcafeteria


Thanks for listening.