The popularity of generative AI tools and systems such as ChatGPT, Claude, Gemini, and GitHub Copilot has led many employees and business units to adopt them independently in pursuit of greater productivity, faster analytics, or simple experimentation. These adoptions often occur outside official governance, risk, and compliance processes, creating what is commonly referred to as “shadow AI”. Shadow AI is the use of AI solutions within an organization without the knowledge, approval, or oversight of the IT department or the Information Security team. While such use may boost efficiency in the short term, it also creates blind spots: unvetted tools can expose sensitive data, bypass security controls, or introduce regulatory and reputational risks. The scale of the issue is growing rapidly, with hundreds of thousands of LLM-based apps emerging in recent months and AI features now …