AI Risk Assessment · Fixed-Fee Engagement

Your employees are using AI every day. Do you know where the data is going?

ChatGPT, Copilot, Claude, Gemini — your team is already using these tools. Customer data, source code, and internal documents are being entered into platforms you may not control. We find the exposure, assess the risk, and tell you exactly what to do about it.

CISSP · CRISC · CCSK · CCZT  |  27+ years in cybersecurity  |  Fixed-fee  |  Veteran-owned

CISSP Certified Information Systems Security Professional
CRISC Certified in Risk and Information Systems Control
CCSK Certificate of Cloud Security Knowledge
CCZT Certificate of Competence in Zero Trust
27+ years across DoD, energy, finance, and SaaS

The problem

AI tools are in your environment — with or without approval

Most SaaS teams adopted AI tools before security had a chance to evaluate them. That gap is now a real business risk.

Widespread

Shadow AI usage

Employees are adopting AI tools that IT and security never approved. Sensitive information flows into these platforms daily — with no inventory and no oversight.

High risk

Data leaking into AI platforms

A developer pastes an API key into Copilot. A sales rep enters a customer list into ChatGPT. A support agent feeds ticket data into a summarizer. That data may be logged, retained indefinitely, or used to train the next version of the model — and you have no record it happened.

Common gap

No AI policy or governance

Most companies have no written AI usage policy: no guidelines on what data can be entered, which tools are approved, or who is responsible for oversight.

Overlooked

Vendor training and retention risk

Most AI vendors retain input data for some period. Some use it to train future models. Few companies have reviewed the actual terms. If your data enters a platform that trains on inputs, you may have lost control of that data permanently — and triggered contractual or regulatory obligations you didn't plan for.

What we do

A structured assessment of your AI risk

We look at how your company is actually using AI — then give you a clear picture of the risk and a prioritized plan to address it.

Discovery

AI usage inventory

We identify which AI tools are being used across your organization, by whom, and for what purposes. This includes sanctioned tools, shadow usage, and browser-based AI platforms.

Analysis

Data exposure assessment

We map what types of data are being entered into AI tools — customer records, source code, financial data, internal documents — and assess the risk of each data flow.

Vendor risk

AI vendor and training risk review

We review AI vendor terms of service, data retention policies, and model training practices to determine whether your data is being used in ways you haven't agreed to.

Governance

Policy and governance gap analysis

We evaluate whether your existing policies, controls, and processes adequately address AI usage. We identify what's missing and what needs to be created or updated.

Deliverables

What you get

Documented outputs — not slide decks

Every engagement produces specific, actionable artifacts:

  • AI Tool Inventory — every tool cataloged by team, data type, and risk level (e.g., "Engineering uses Copilot with access to private repos — high risk")
  • Data Exposure Findings — specific examples of data leaving your environment, with severity ratings and business impact
  • Vendor Training & Retention Review — which vendors retain your data, which may use it for training, and what your contracts actually say
  • Prioritized Remediation Plan — what to fix this week, this month, and this quarter — with specific actions, not vague recommendations
  • Executive Summary — a one-page overview for leadership, board, or investor reporting

What you can do with this

After the engagement, you have:

  • A complete picture of which AI tools process your data and how
  • Evidence of data exposure risks — with documented proof, not assumptions
  • A remediation roadmap your team can start executing immediately
  • Documentation that answers AI-related questions on customer security questionnaires
  • A foundation for SOC 2 or ISO 27001 compliance if you pursue it later
  • Optional: A ready-to-implement AI Acceptable Use Policy tailored to your company

Outcomes

How this helps your business

Stop data leakage

Identify exactly which sensitive data — customer records, source code, financial information — is entering AI platforms, and close those specific exposure paths.

Know your vendors

Understand which AI vendors retain your data, which may use it for model training, and whether your contracts actually protect you.

Visible controls

Move from "we think people use ChatGPT" to a documented inventory with approved tools, prohibited data types, and clear employee guidelines.

Answer the audit

When your SOC 2 auditor or enterprise customer asks "how do you manage AI risk?" — hand them a documented assessment, not a verbal answer.

Compliance connection

How this connects to your compliance goals

If you're pursuing SOC 2 or ISO 27001

Auditors are asking about AI. Under the SOC 2 trust services criteria (especially security and confidentiality), auditors increasingly expect companies to demonstrate that they understand and control how data flows through AI tools. ISO 27001 Annex A controls for information classification and supplier relationships apply directly.

This assessment produces evidence and documentation that directly supports your audit readiness — whether you're starting compliance work or maintaining an existing certification.

If customers are asking about AI governance

Enterprise buyers are adding AI governance questions to security questionnaires. They want to know: what AI tools do you use? How do you control data flowing into them? What policies do you have?

The deliverables from this engagement give you documented answers to those questions — backed by a real assessment, not guesswork.

Need a full compliance gap assessment? →

Engagement details

How the engagement works

Scope and timeline

  • Duration: 2–3 weeks, depending on company size and tool complexity
  • Format: Remote engagement — interviews, tool review, documentation analysis
  • Pricing: Fixed-fee, scoped to your environment. No hourly billing.
  • Starts with: A free 30-minute consultation to understand your situation

What to expect

  • Initial call to understand your environment and AI usage
  • Clear scope and fixed quote before any work begins
  • Structured assessment with minimal disruption to your team
  • Final report with findings, recommendations, and executive summary
  • Debrief call to walk through results and next steps

FAQ

Common questions

Who is this assessment for?

SaaS companies with 20–300 employees where teams are using AI tools in their daily work. If you have employees using ChatGPT, Copilot, or similar tools — and you don't have visibility into what data is being entered — this assessment is for you.

We're a small team — is this overkill?

Smaller teams often have the highest risk because there are fewer controls in place. The assessment is sized to your environment. If you have 30 employees using AI tools with no policy, that's a real exposure — regardless of company size.

How is this different from a penetration test?

A penetration test looks for technical vulnerabilities in your systems. This assessment looks at how your people are using AI tools and where your data is going. These are business process and governance risks — not infrastructure vulnerabilities.

What if we're not pursuing compliance yet?

You don't need to be working toward SOC 2 or ISO 27001 to benefit from this. The assessment stands on its own as a practical security exercise. If you pursue compliance later, the findings and documentation carry forward.

How long does it take?

Most assessments are completed in 2–3 weeks. We work efficiently and deliver a clear report — not a months-long consulting engagement.

What does it cost?

Every engagement is fixed-fee, scoped to your situation. We'll discuss your environment on an initial call and provide a clear quote before any work begins. No hourly billing, no open-ended retainers.

Find out where your AI risk is — before it becomes an incident

Start with a free 30-minute call. We'll discuss your current AI usage, identify likely risk areas, and recommend a clear next step.

Also available: Vendor Security Review Sprint · Security Program Gap Assessment