
Automating Discovery: How AI Helped Process 400+ Requirements

Luke Pekrul - Project Manager
April 2, 2026

At Tag1, we believe in proving AI within our own work before recommending it to clients. This post is part of our AI Applied content series, where team members share real stories of how they're using artificial intelligence and the insights and lessons they learn along the way. Here, Luke Pekrul, Project Manager, shares how he used AI to automate requirements gathering for a complex CRM migration, transforming input from 30 stakeholders into a vendor-ready RFP and solving a critical problem along the way: models that quietly skip requirements.

The Discovery Problem

I was managing a CRM migration project for a nonprofit client moving from NetForum to Salesforce. The discovery phase meant gathering requirements from 30 stakeholders across six non-technical teams and turning all of it into a structured RFP that vendors could actually use - high-level summaries to convey scope quickly, with enough granular detail for accurate estimates.

Anyone who's done this kind of discovery knows the pain. You're pulling from interviews, meeting notes, and surveys, trying not to lose context, create duplicates, or miss requirements entirely. On top of that, neither I nor the stakeholders were domain experts in these specific CRMs, so there was a translation gap to bridge on both sides.

Building the Collection Pipeline

I started by using different tools to gather raw requirements through multiple channels.

Gemini for Google Docs parsed years of meeting notes and strategic plans to seed a master requirement list. ChatGPT helped me draft interview protocols tailored to each team. Pre-interview Google Form surveys identified hot topics, then I conducted meetings in Google Meet to generate full transcripts. Gemini pulled explicit and implied requirements directly from these transcripts into structured tables.

This tiered approach ensured nothing was overlooked at the collection stage. But that's when I hit a wall.

The Problem: AI Keeps Skipping Things

The models kept skipping requirements. They would scan a transcript and produce a top-ten-style list, but the granular details disappeared. When I asked them to look for missing items, the models would either duplicate previous requirements or simply stop, claiming they were finished even when 30+ requirements remained unaddressed.

This is a classic problem. The model thinks it's done, so it tells you it's done, and you have no way to verify that without reading every single line yourself.

I needed a way to force the models to maintain a strict inventory.

The Breakthrough: Force a One-to-One Mapping

The solution came from implementing what I call a Traceability Matrix constraint. Instead of asking the models to extract requirements, I forced them to produce a one-to-one table mapping every Original Source ID to a New Requirement ID.

This simple structural change eliminated data loss entirely. The models couldn't skip anything because they had to account for every source item. And the matrix provided an audit trail, so I could map consolidated RFP requirements back to the original interview notes.

The lesson: never ask a model "Did you miss anything?" It will almost always say no. Instead, force a structured audit that a non-technical user can verify with a simple spreadsheet lookup.
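To make that audit concrete, here is a minimal sketch, in Python rather than a spreadsheet, of the coverage check the Traceability Matrix enables. The IDs and the exported pair format are hypothetical examples, not the project's actual data; the point is that missing items become mechanically detectable instead of taking the model's word for it.

```python
# Sketch: verify every source item is accounted for in the traceability
# matrix -- the same check a simple spreadsheet lookup performs.
# All IDs below are hypothetical.

source_ids = {"SRC-001", "SRC-002", "SRC-003", "SRC-004"}

# One row per mapping: (Original Source ID, New Requirement ID)
matrix = [
    ("SRC-001", "REQ-010"),
    ("SRC-002", "REQ-011"),
    ("SRC-004", "REQ-012"),
]

mapped = {src for src, _ in matrix}
missing = source_ids - mapped

if missing:
    # Anything listed here was skipped and needs a follow-up pass.
    print(f"Unmapped source items: {sorted(missing)}")
else:
    print("Full coverage: every source item maps to a requirement.")
```

Running this against the sample data flags `SRC-003` as unmapped, which is exactly the kind of silent omission the matrix constraint is designed to surface.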

Five Specialized Agents

To prevent the models from becoming overwhelmed, I created five specialized agents in Gemini, each with a focused job:

Agent 1: AI RFP Advisor established the multi-step migration strategy and RFP structure.

Agent 2: The Refiner translated rough stakeholder needs into standardized User Stories using the Traceability Matrix.

Agent 3: The De-duper merged overlapping stories into high-level functional buckets.

Agent 4: RFP Writer drafted the formal Discovery Phase RFP language.

Agent 5: RFP Strategist critically reviewed the draft for deal-breakers or gaps that might lead to inflated vendor pricing.

Breaking the work into distinct stages kept each agent focused and prevented the kind of drift that happens when you throw everything at a single model.
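For readers who think in code, the staging can be sketched as plain function composition. This is my illustrative sketch, not the project's implementation: the agents were built in Gemini, and `run_agent` is a hypothetical stand-in for invoking a model with a stage-specific prompt. What matters is that each stage receives only its own inputs.

```python
# Sketch of the five-stage pipeline as function composition.
# `run_agent` is a hypothetical placeholder for a model call with a
# stage-specific system prompt; each stage sees only its own inputs.

def run_agent(role_prompt: str, payload: str) -> str:
    # Placeholder: in practice this would call your model API.
    return f"[{role_prompt}] processed {len(payload)} chars"

def build_rfp(raw_requirements: str) -> str:
    strategy = run_agent("RFP Advisor: define migration strategy", raw_requirements)
    stories = run_agent("Refiner: map every source ID to a user story", raw_requirements)
    buckets = run_agent("De-duper: merge overlapping stories", stories)
    draft = run_agent("RFP Writer: draft formal RFP language", strategy + "\n" + buckets)
    return run_agent("RFP Strategist: flag gaps and deal-breakers", draft)

print(build_rfp("stakeholder transcripts and survey notes"))
```

Keeping each stage's prompt and inputs separate is what prevents the drift described above: no single context window has to hold the whole job at once.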

What Changed

The workflow reduced core processing time by roughly 75%, compressing what normally takes weeks into days. The Traceability Matrix delivered 100% requirements coverage.

I ended up with a clean, vendor-ready RFP that accounted for every stakeholder input and maintained a clear audit trail back to the source.

Where Else This Works

This approach works for any project where you're collecting messy input from non-technical stakeholders and need to turn it into something structured and complete.

Large-scale migrations with a mix of current state and future state user stories across multiple functional areas benefit from this structure. So do maintenance backlogs where you're turning a year's worth of support tickets into a structured roadmap for a new fiscal year.

Client discovery phases benefit too, especially if clients are already using Google Workspace tools like Gemini for Google Docs and Google Meet transcripts. They can organize their internal needs before a project even begins.

Three Ways to Use AI as an Advisor

One of the most valuable lessons from this project was learning to use these tools at every stage, not just for execution.

  1. Workflow Strategy: Use the models to define the best way to use them for the specific task at hand.
  2. Debugging Responses: When output is puzzling or incomplete, explain the behavior and ask for help debugging it.
  3. Quality Assurance: Use a specialized agent to critically review the work of other agents. This helps catch problems early.

The Traceability Matrix was the key unlock, but the broader insight is that you can force these tools to be more rigorous by building in structural constraints they can't ignore.

What's Next

I plan to refine this workflow further and integrate it into our existing Tag1 discovery processes. The goal is to make our current requirements gathering even more efficient, especially for complex projects with non-technical stakeholders.

Interested in learning more about using AI to streamline discovery and requirements gathering? Reach out; we're happy to share insights and practical tips from this project and others.

This post is part of Tag1’s AI Applied content series, where we share how we're using AI inside our own work before bringing it to clients. Our goal is to be transparent about what works, what doesn’t, and what we are still figuring out, so that together, we can build a more practical, responsible path for AI adoption.

Bring practical, proven AI adoption strategies to your organization. Let's start a conversation! We'd love to hear from you.
