Shadow AI usage
Employees are using AI tools that IT and security never approved. Customer data, source code, and internal documents are being entered into platforms with no oversight.
ChatGPT, Copilot, Claude, Gemini: your team is already using these tools, and your data may already be inside platforms you don't control. We find the exposure, assess the risk, and tell you exactly what to do about it.
CISSP · CRISC · CCSK · CCZT | 27+ years in cybersecurity | Fixed-fee | Veteran-owned
Most SaaS teams adopted AI tools before security had a chance to evaluate them. That gap is now a real business risk.
A developer pastes an API key into Copilot. A sales rep enters a customer list into ChatGPT. A support agent feeds ticket data into a summarizer. That data may be logged, retained indefinitely, or used to train the next version of the model — and you have no record it happened.
Most companies have no written policy for AI usage. No guidelines on what data can be entered, which tools are approved, or who is responsible for oversight.
Most AI vendors retain input data for some period. Some use it to train future models. Few companies have reviewed the actual terms. If your data enters a platform that trains on inputs, you may have lost control of that data permanently — and triggered contractual or regulatory obligations you didn't plan for.
We look at how your company is actually using AI — then give you a clear picture of the risk and a prioritized plan to address it.
We identify which AI tools are being used across your organization, by whom, and for what purposes. This includes sanctioned tools, shadow usage, and browser-based AI platforms. (Simplified sketches of this step and the next appear after this list.)
We map what types of data are being entered into AI tools — customer records, source code, financial data, internal documents — and assess the risk of each data flow.
We review AI vendor terms of service, data retention policies, and model training practices to determine whether your data is being used in ways you haven't agreed to.
We evaluate whether your existing policies, controls, and processes adequately address AI usage. We identify what's missing and what needs to be created or updated.
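To make the first two steps concrete, here is a minimal sketch of what discovery can look like in practice: tallying which users reach known AI endpoints in a proxy or DNS log export. Everything here is an assumption for illustration; the proxy_log.csv file name, the user/domain column layout, and the domain list are hypothetical, and real engagements also draw on SSO application dashboards and browser-extension inventories.

```python
# Minimal discovery sketch: tally users observed reaching known AI-tool
# endpoints in a proxy log export. Assumes a hypothetical CSV with
# "user" and "domain" columns; the domain list is illustrative, not exhaustive.
import csv
from collections import defaultdict

AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "api.openai.com": "OpenAI API",
    "claude.ai": "Claude",
    "api.anthropic.com": "Anthropic API",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
}

def shadow_ai_usage(log_path):
    """Map each detected AI tool to the set of users observed using it."""
    usage = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_DOMAINS.get(row["domain"].strip().lower())
            if tool:
                usage[tool].add(row["user"])
    return usage

for tool, users in sorted(shadow_ai_usage("proxy_log.csv").items()):
    print(f"{tool}: {len(users)} users")
```

Data mapping can be sketched the same way. The fragment below flags sensitive data types in text bound for an AI tool; the regexes are deliberately crude illustrations, since production data-loss-prevention tooling relies on checksums, context, and trained classifiers rather than bare patterns.

```python
# Minimal data-mapping sketch: flag sensitive data types in a prompt.
# Patterns are illustrative only, not production-grade detection.
import re

PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def classify(text):
    """Return the sensitive data types detected in a piece of text."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

print(classify("Contact jane@example.com, key AKIAABCDEFGHIJKLMNOP"))
# -> ['email address', 'AWS access key']
```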
Every engagement produces specific, actionable artifacts. After the engagement, you can:
Identify exactly which sensitive data — customer records, source code, financial information — is entering AI platforms, and close those specific exposure paths.
Understand which AI vendors retain your data, which may use it for model training, and whether your contracts actually protect you.
Move from "we think people use ChatGPT" to a documented inventory with approved tools, prohibited data types, and clear employee guidelines.
When your SOC 2 auditor or enterprise customer asks "how do you manage AI risk?" — hand them a documented assessment, not a verbal answer.
Auditors are asking about AI. Under the SOC 2 trust services criteria (especially confidentiality and security), auditors increasingly expect companies to demonstrate that they understand and control how data flows through AI tools. ISO 27001 Annex A controls around information classification and supplier relationships apply directly.
This assessment produces evidence and documentation that directly supports your audit readiness — whether you're starting compliance work or maintaining an existing certification.
Enterprise buyers are adding AI governance questions to security questionnaires. They want to know: what AI tools do you use? How do you control data flowing into them? What policies do you have?
The deliverables from this engagement give you documented answers to those questions — backed by a real assessment, not guesswork.
SaaS companies with 20–300 employees where teams are using AI tools in their daily work. If you have employees using ChatGPT, Copilot, or similar tools — and you don't have visibility into what data is being entered — this assessment is for you.
Smaller teams often have the highest risk because there are fewer controls in place. The assessment is sized to your environment. If you have 30 employees using AI tools with no policy, that's a real exposure — regardless of company size.
A penetration test looks for technical vulnerabilities in your systems. This assessment looks at how your people are using AI tools and where your data is going. These are business process and governance risks — not infrastructure vulnerabilities.
You don't need to be working toward SOC 2 or ISO 27001 to benefit from this. The assessment stands on its own as a practical security exercise. If you pursue compliance later, the findings and documentation carry forward.
Most assessments are completed in 2–3 weeks. We work efficiently and deliver a clear report — not a months-long consulting engagement.
Every engagement is fixed-fee, scoped to your situation. We'll discuss your environment on an initial call and provide a clear quote before any work begins. No hourly billing, no open-ended retainers.
Start with a free 30-minute call. We'll discuss your current AI usage, identify likely risk areas, and recommend a clear next step.