Tech Tidbit – The Challenges of Shadow AI

November 25th, 2025

A new insider threat is emerging in your network.

It is not a rogue employee.

It is not even a malicious attack.

It is your own users improperly feeding your district's protected data - personally identifiable information, protected health data, confidential data, and financial data - into emerging AI tools.

Exposure comes innocently enough. A user takes ChatGPT, Microsoft Copilot, or whatever their new favorite AI tool is, and drops your district's protected information into it to create a fantastic report or analysis.

However, the reality is that those free, public AI tools don't provide the data confidentiality guarantees your district requires to be FERPA or EdLaw 2-d compliant and to meet the NIST CSF data-handling standards.

One of the most common issues reported by early AI adopters is the unintentional sharing of internal confidential data with a non-confidential AI tool. The results can be devastating: suddenly, your user has shared your confidential data with the world. We are already pulling our hair out trying to prevent data loss. Now, anyone with a computer can accidentally trigger a major data privacy or confidentiality breach.

There is a right way and a wrong way for your users to adopt AI in your district. Please make this a significant part of your security awareness training: explain to your staff which tools the district has approved and the proper way to use them.

Many of these commercial AI subscriptions claim to keep your data separate from their general knowledge base and not share it outside your organization.

However, security threats exist with AI even if you use a secure AI tool. Here are a few I have read about recently:

  • Google Gemini being manipulated into hiding malicious links in its summaries.
  • Google Gemini disclosing data remotely via hidden prompts.
  • Microsoft Copilot disclosing data remotely via hidden prompts.
  • ChatGPT forgetting its guardrails and disclosing confidential data when asked to present it in the format of a nursery rhyme.

There is a lot to unpack in the coming months. However, please start by incorporating AI hygiene into your upcoming security awareness discussions as everyone returns to the classroom.

Acture is here to help you plan and roll out your AI initiatives. Give us a call and let's begin the discussion.

Scott F. Quimby

Senior Technical Advisor, CISSP, vCISO

Acture Solutions, Inc.