FREE · NO SOW · NO COMMITMENT

The AI Readiness Assessment.
Thirty minutes of your time.

Two weeks of ours. You leave with a scored use-case shortlist, a cost range, a data-boundary review, and a written recommendation. Whether you build it with PacketNAP or not is your call.

30-min kickoff · ~2-week turnaround · Written deliverable · No upsell

§01 · What you get

A written deliverable you can
actually take to your CFO.

Not a sales deck. A specific document that names the highest-leverage use case, the data it touches, the compliance posture, the build cost, and the ROI math.

01 / Output

Use-case shortlist, scored

3 to 7 candidate projects ranked by value, feasibility, data sensitivity, and time-to-production. You’ll know which one to pilot first, and which two to leave alone.

02 / Output

Cost range & effort estimate

Ballpark price for each use case: cloud-only, hybrid, or fully on-prem. Hardware sized. Licensing mapped. Timeline bracketed. No surprises when the proposal arrives.

03 / Output

Data-boundary review

Exactly which datasets each proposed AI would touch, where they’d flow, and which regulatory frameworks (HIPAA, SOC 2, PCI, MSAs) gate them. Red flags called out.

04 / Output

Written recommendation

A plain-English memo from an engineer, not a sales rep. It tells you what to build, whether to use cloud or local, and whether you even need an outside team to do it.

§02 · Protocol

How the two weeks go.

Most of the work happens on our side. You’re on two short calls plus a data walkthrough. Everything else is asynchronous.

Day 0 · 30 min · Kickoff call

Scope the conversation

What you’re trying to solve, what you’ve already tried, what your board is asking about. We take notes. No deck, no pitch.

Day 1–3 · Discovery

Data & stack walkthrough

A single async working session with your IT lead. We look at what systems you run, what data lives where, who governs it, and what’s off-limits.

Day 4–9 · Analysis

Scoring & options

We score candidate use cases against value, feasibility, data sensitivity, and integration effort. We draft the cost ranges and flag regulatory constraints.

Day 10–12 · Writeup

Written recommendation

Delivered as a PDF and a shared doc. Roughly 6 to 12 pages. You read it. We go through it together if you want, or you take it and run.

Day 14 · Readout

30-min debrief (optional)

You ask questions. We answer them. We do not attempt to sell you anything. If the recommendation is “don’t build this right now,” the deliverable will say so.

§03 · Scope

What we actually dig into

Everything below comes out of the kickoff and discovery sessions. Nothing sits in a “TBD” column at the end.

THEME 01

Data posture

  • Where your data lives today
  • Which datasets an AI would need
  • MSAs, contracts, sub-processors
  • HIPAA, SOC 2, PCI, export rules

THEME 02

Use-case fit

  • Helpdesk / service desk
  • Customer-facing chat & voice
  • Data & reporting agents
  • Copilot rollout & governance

THEME 03

Infrastructure

  • Cloud, on-prem, or hybrid
  • GPU sizing for local models
  • Integration with existing stack
  • Network, storage, observability

THEME 04

Model selection

  • Llama, Qwen, Mistral, DeepSeek
  • Claude, GPT, Copilot enterprise
  • Latency, cost, quality tradeoffs
  • vLLM, Ollama, llama.cpp fit

THEME 05

Governance & risk

  • Hallucination controls
  • Audit logging & SSO
  • Red-team & eval plan
  • Incident response path

THEME 06

ROI framing

  • Metric candidates
  • Baseline capture plan
  • Payback window estimates
  • Scale-out triggers

§04 · Qualifying criteria

Is this a fit?

We say no to about a third of the assessment requests we get. Here’s how we decide.

Good fit

  • IT team of 5 to 200 at a real business (not a 3-person startup)
  • You have a concrete problem: ticket pain, compliance pain, customer-service pain, reporting pain
  • Your data boundary matters (HIPAA, SOC 2, PCI, MSA constraints, legal privilege)
  • You want to ship something, not run a year-long strategy engagement
  • You already run infrastructure. You understand SLAs, logs, and pagers

Not a fit

  • “We want to do AI but don’t know why” (come back when you have a problem)
  • You want a $500K strategy deck and a roadmap. That’s not what we do
  • You’re looking to resell a SaaS AI product at a markup
  • You expect to hold the free assessment hostage through weeks of scoping calls
  • Early-stage startup with no data, no users, no compliance floor to defend

§05 · Request the assessment

Tell us a little about your stack.

We read every submission. If it looks like a fit, an engineer replies within one business day with a kickoff calendar link. If it doesn’t, we’ll tell you why.

Request kickoff

In the message box, tell us briefly what you are trying to solve and which use cases are on your mind. We reply within one US business day.

Submitting does not create an obligation on either side. No data from this form is used to train any AI model, ever.

§06 · Q & A

Frequently asked questions

Is the assessment actually free, or is there a catch?

It’s actually free. There is no catch. We spend roughly 20 to 30 hours per assessment on our side, and we accept that cost because the deliverable makes it obvious whether we should work together. If the recommendation is “build this in-house with your own team,” we write that in the memo and move on. About one in three assessments ends that way.

What do you need from us to start?

A 30-minute kickoff call, a 60- to 90-minute async data walkthrough with your IT lead, and the patience to read a 6-to-12-page document at the end. That’s it. No access to your systems required. No credentials. No production data.

Will you sign an NDA before we share specifics?

Yes. We use a mutual NDA template that most legal teams approve in a day. If yours has its own, we’ll sign that one as long as it’s reasonable. Just flag it when you request the assessment.

What if we already have a vendor in mind?

That’s fine. We’ll score the vendor against the same framework we score our own approach with, and tell you whether it holds up. Sometimes the recommendation is to go with someone else. We would rather tell you that up front than waste a year on a doomed project.

We’re not a PacketNAP hosting customer. Does that matter?

No. The assessment is open to anyone serious about deploying AI. If you later want to host the build on PacketNAP hardware, great. If you want to run it in your own datacenter or on Azure, also great. The recommendation memo is vendor-neutral.

What happens after the assessment?

One of three paths. Path A: you take the memo and build it in-house. Path B: you engage us for a Private AI Starter (one focused use case, 2 to 4 weeks to production, low six figures). Path C: the memo says “not yet” and we part ways with a written rationale. Roughly one in three ends in Path C. That’s on purpose.