AI Is Here. Is Your Insight Practice Ready?

AI is already being used across research workflows, often in small, scattered ways. But real scale will only come when your team uses AI with clarity, consistency, and quality, and when your systems can support that use.

These six markers will help you assess whether your insight practice is truly ready to work with AI, and what you need to do to get there.

1. You Know Exactly Where You Want to Use AI

Not everywhere. Not “wherever it helps.” Specific steps, specific formats. Which parts of the research workflow do you want AI to touch? Drafting discussion guides? Tagging qual data? Generating topline reports? The answer can change over time, but it can’t be inconsistent within the team.

If not: Pick two points in your current workflow this month where you want consistent AI use. Make it official. No experiments beyond that scope.

2. You’ll Use It Consistently, Not Occasionally

AI doesn’t work well when it’s “sometimes used” and “sometimes ignored.” It works when it’s built into the rhythm of how things get done. Are certain tasks (e.g., first-draft screener, summary of 5 interviews) always done with AI? Or does it depend on who’s running the project or how much time they have? Is there a standard flow everyone starts from?

If not: Choose one recurring task. Define how AI will be used there, by default. Document it, train for it, review it weekly.
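If your team keeps its tooling in code, one way to make that default concrete is to write it down as a shared definition every project starts from. The sketch below is illustrative only, assuming Python and invented field names like prompt_template and inputs_required; the point is that the task, the prompt, the required inputs, and the review step live in one agreed place rather than in individual heads.

```python
# Illustrative sketch: one recurring task ("first-draft screener") documented as
# the default everyone starts from. Field names and values are assumptions,
# not a prescribed schema.
DEFAULT_TASKS = {
    "first_draft_screener": {
        "model": "the team's approved model",  # pin one model, not "whatever is handy"
        "prompt_template": (
            "Draft a screener for a study on {topic}. "
            "Target audience: {audience}. Include {n_questions} questions "
            "covering demographics, category usage, and disqualifiers."
        ),
        "inputs_required": ["topic", "audience", "n_questions"],
        "review": "senior researcher sign-off before fieldwork",  # the human check
        "owner": "whoever runs the project, not a single specialist",
    }
}

def build_prompt(task_name: str, **inputs) -> str:
    """Fill the shared template so every project starts from the same prompt."""
    task = DEFAULT_TASKS[task_name]
    missing = [k for k in task["inputs_required"] if k not in inputs]
    if missing:
        raise ValueError(f"Missing required inputs: {missing}")
    return task["prompt_template"].format(**inputs)

# Example use on a hypothetical project
print(build_prompt("first_draft_screener",
                   topic="meal kit subscriptions",
                   audience="urban parents",
                   n_questions=8))
```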

3. The Team Uses It the Same Way

A team where one person uses GPT-4 and another uses zero-shot Claude with a random prompt is not “AI-ready.” That’s experimentation, not adoption. Do you have shared AI use frameworks? Are outputs reviewed and improved as a group? Can you trust a junior and a senior researcher to get comparable outcomes?

If not: Host one 30-minute working session per week where your team runs the same AI tool and prompt on the same input and compares results. Learn together. Standardize what works.
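For teams comfortable with scripts, that weekly session can be as simple as everyone running the agreed prompt on the same five interviews and pasting their output into one place for comparison. The sketch below is a minimal, assumed setup (placeholder researchers, placeholder themes, a naive keyword check); it only illustrates how quickly divergence between a junior and a senior researcher becomes visible.

```python
# Illustrative sketch of the weekly working session: everyone runs the SAME prompt
# on the SAME interview set, pastes their AI output here, and the team compares
# which agreed themes each output surfaced. Names and outputs are placeholders.
AGREED_THEMES = {"price sensitivity", "trust in brand", "delivery friction"}

session_outputs = {
    "junior_researcher": "Respondents raised price sensitivity and delivery friction as barriers...",
    "senior_researcher": "Key themes: price sensitivity, trust in brand, and delivery friction...",
}

def themes_found(output_text: str) -> set:
    """Naive check: which agreed themes does this output mention at all?"""
    text = output_text.lower()
    return {theme for theme in AGREED_THEMES if theme in text}

for researcher, output in session_outputs.items():
    found = themes_found(output)
    missing = AGREED_THEMES - found
    print(f"{researcher}: found {sorted(found)}, missing {sorted(missing)}")
```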

4. You Prepare Your Data So AI Can Actually Work

Garbage in, garbage out. And with no human in the loop, garbage can slip straight through to your stakeholders. Are your transcripts, notes, and past reports clean, well-tagged, and accessible? Have you cleaned your input libraries, removed duplicates, added tags, and clarified terms?

If not: Start by choosing one study repository. Re-tag it with themes, audience type, and outcome. Use this clean set as your base AI input. Then set a process so the same is done for every new study.
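Here is what “re-tagged” can look like in practice: one entry per study, with themes, audience type, and outcome captured as fields, and a simple rule that only cleaned, fully tagged studies are fed to AI. The structure below is an assumption, not a prescribed schema; adapt the field names to whatever your repository already uses.

```python
# Illustrative sketch of a re-tagged study repository entry. The tag fields
# (themes, audience type, outcome) follow the advice above; everything else
# is an assumption about how such a library might be organised.
study_repository = [
    {
        "study_id": "2023-07-loyalty-drivers",
        "themes": ["switching behaviour", "loyalty programmes"],
        "audience_type": "existing customers, 25-44",
        "outcome": "informed Q4 retention messaging",
        "files": ["transcripts/loyalty_interviews.txt", "reports/topline.pdf"],
        "cleaned": True,  # duplicates removed, terms clarified
    },
]

def ai_ready_studies(repo):
    """Only pass cleaned, fully tagged studies to the AI as base input."""
    required = {"themes", "audience_type", "outcome"}
    return [s for s in repo if s.get("cleaned") and required <= s.keys()]

print(ai_ready_studies(study_repository))
```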

5. You’re Willing to Invest in Better Inputs

With AI, the quality of your thinking is only as good as the quality of what you feed it. Are your briefs clear? Do they include context the AI needs? Are you documenting decisions made during fieldwork so AI can reference them later? Are your interviews and recordings of a quality that AI can cleanly interpret?

If not: Invest in the technology and preparation needed to capture clean, high-quality inputs.

6. You Know How and When to Verify AI Outputs

AI-generated outputs vary in risk. Some things can go out with a light review. Others, if wrong, can mislead your team or client. If you haven’t agreed, as a team, on which is which, you’re not ready. Have you defined which AI outputs are “review required” vs “review optional”? Is there a standard for what counts as a verified output (e.g., cross-check, source cited, QA tag)? Does your team know how to validate things like thematic accuracy, logical jumps, misquoted verbatims, and overconfident summaries? Do you log failures and corrections so the system gets better?

If not: Run a sandbox exercise: review a batch of AI outputs together, agree which ones need verification and what that verification looks like, and set common standards.
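One way to capture what the sandbox agrees on: a simple mapping of output types to review level, a verification checklist, and a running log of corrections. The sketch below is illustrative only; the output categories, checklist items, and the file name ai_corrections.csv are assumptions, not a standard.

```python
# Illustrative sketch of sandbox outcomes: which output types require review,
# what counts as "verified", and a simple correction log so the system improves.
import csv
import datetime

REVIEW_POLICY = {
    "topline_report": "review_required",       # wrong here misleads clients
    "interview_summary": "review_required",
    "discussion_guide_draft": "review_optional",
    "internal_brainstorm": "review_optional",
}

VERIFIED_CHECKLIST = [
    "cross-checked against source material",
    "verbatims matched to the transcript",
    "QA tag added by the reviewer",
]

def log_correction(output_type, error, fix, path="ai_corrections.csv"):
    """Append a correction so recurring failure modes become visible over time."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.date.today().isoformat(), output_type, error, fix]
        )

# Example: a misquoted verbatim caught during review
log_correction("interview_summary",
               "misquoted verbatim",
               "replaced with exact transcript quote")
```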

These are practical steps you can start this week. If your team aligns on just one of these six areas, you’ll reduce friction, improve output quality, and move from scattered AI usage to something that’s actually scalable.

Start small. Stay consistent. Make it real.