Building the Dashboard Collaborator: Getting Every Voice Heard Before the First Dashboard Meeting

Nwokoro Duru Ahanotu - Data Strategist
April 16, 2026

At Tag1, we believe in proving AI within our own work before recommending it to clients. This post is part of our AI Applied content series, where team members share real stories of how they're using Artificial Intelligence and the insights and lessons they learn along the way. Here, Dr. Duru Ahanotu, Data Strategist, documents his four development sessions building the Dashboard Collaborator, an AI-powered assistant that guides teams through the entire process of designing and building a data dashboard.

The Problem With Dashboard Projects

If you've ever been part of a project where someone needed a data dashboard — the charts and numbers that help teams make decisions — you know how painful the process can be. People have meetings. They disagree about what the dashboard should show. Someone builds it. It's wrong. They fix it. It's still not quite right. Weeks pass.

We are building a tool to fix that. The Dashboard Collaborator is an AI-powered assistant that guides teams through the entire process of designing and building a data dashboard, from the first conversation about what they need all the way to a finished product. Instead of relying on scattered meetings and email chains, the AI handles the interviews, organizes everyone's ideas, spots disagreements early, and even helps write the technical code.

This post covers all four development sessions, the full arc from first proof of concept to a working end-to-end tool.

The First Question: Can We Make This Live in Slack?

Think of this first session as laying the foundation of a house. You can't see much from the street yet, but everything that comes next depends on getting this part right.

Three things came together in this session.

Teaching the computer to listen. We installed a tool called Whisper, an AI developed by OpenAI. Whisper can listen to a voice recording and convert it to text with impressive accuracy. This matters because speaking is faster and more natural than typing, especially when you're trying to explain a complex idea.

Building an interview bot that lives in Slack. Rather than asking people to learn a new tool, we brought the Dashboard Collaborator directly into Slack, the messaging app many teams already use every day. We created a bot and gave it the right permissions to send and receive private messages.

Connecting the bot to Claude. When a team member launches the Dashboard Collaborator in Slack, the AI takes the lead. It introduces itself, explains its purpose, and asks the first question. From there, it guides the entire interview — reading the full conversation history, building on everything that has been discussed, and steering the conversation back on track when it drifts. Every message is saved to a file automatically, so if you close Slack and come back tomorrow, the bot picks up exactly where you left off. Nothing is lost.
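The save-everything behavior described above can be sketched as a small persistence layer: one JSON file per Slack user, appended to on every exchange. This is a minimal illustration, not the actual implementation; the file layout and function names are assumptions.

```python
import json
from pathlib import Path

def transcript_path(user_id: str, folder: Path) -> Path:
    """One JSON file per Slack user; the filename is the user's Slack ID."""
    return folder / f"{user_id}.json"

def load_history(user_id: str, folder: Path) -> list:
    """Return the saved conversation, or an empty history for a new user."""
    path = transcript_path(user_id, folder)
    if path.exists():
        return json.loads(path.read_text(encoding="utf-8"))
    return []

def append_exchange(user_id: str, folder: Path, role: str, text: str) -> list:
    """Save every message as it happens, so a restart loses nothing."""
    history = load_history(user_id, folder)
    history.append({"role": role, "text": text})
    transcript_path(user_id, folder).write_text(
        json.dumps(history, indent=2), encoding="utf-8"
    )
    return history
```

Each incoming message and each bot reply gets appended before the bot responds, so "picking up where you left off" is nothing more than reloading the file and handing the full history back to the AI.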

This is one of the things that makes a one-on-one AI interview different from a group meeting. Getting a single person back on track in a conversation with an AI is a much simpler problem than doing the same thing in a room full of conflicting personalities, competing priorities, and charged opinions. The AI does not get frustrated, does not take sides, and does not lose the thread.

By the end of the session, a real conversation had happened. A team member launched the bot, and within seconds the Dashboard Collaborator was running the interview — warmly, intelligently, and on task. The saved conversation file confirmed it: every exchange was recorded cleanly and correctly.

Two obstacles came up during setup. A software conflict caused a known compatibility error on Windows, resolved by installing one of the tools differently. A file location issue inside a OneDrive folder was fixed by moving things to a simpler location on the hard drive. Each obstacle was resolved the same session it appeared. No blockers carried over.

What we built that day was small but significant. It proved that the core idea works. A person can open Slack, speak or type to an AI assistant, and have a real, intelligent, context-aware conversation that is automatically saved and ready to build on. Every feature from here — voice input, structured interview questions, automated summaries, technical code generation — builds on this foundation.

Giving the Dashboard Collaborator Ears

The goal for this session was exciting: give the Dashboard Collaborator ears. We wanted to speak to it out loud using Slack's built-in voice recording and have it understand, think, and write back.

That goal was achieved. By the end of the session, you could pick up a microphone, say something to the bot, and receive an intelligent, context-aware written response. No typing required.

Here is what the experience looks like in practice. The interview has already been configured with the project's goals, set up by a product owner or project lead before any stakeholder sits down. At your convenience, you open the direct message with the Dashboard Collaborator in Slack. Instead of typing (which is always an option), you click the microphone icon and start answering. Maybe the bot is asking what data you look at most often, or what decisions the dashboard needs to support. You speak your answer and send it. Within seconds, the bot responds in the chat. First, it confirms it received your voice message and is transcribing it. Then it shows you what it heard, your words converted to text, so you can confirm it understood you correctly. Then it responds thoughtfully, asking follow-up questions to dig deeper into your needs. The whole thing feels like leaving a voicemail and receiving a smart, written reply almost instantly.

Getting the voice to work turned out to be a multi-layered puzzle.

The file format problem. When Slack saves a voice message, it packages the audio in a specific file format. The bot was initially trying to open it like a different kind of file — like trying to open a Word document as if it were a PDF. The fix was teaching the bot to correctly identify what kind of audio file Slack was sending and handle it accordingly.

The locked door problem. Even after fixing the file format, the bot kept downloading what appeared to be an audio file but was actually an HTML web page — a "you don't have permission" error page in disguise. Slack protects its stored files so that only authorized apps can access them. The bot was knocking on the right door but not showing its credentials properly. Switching to a more secure Slack API method that properly authenticates the download fixed it.
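The locked-door fix comes down to two things: attaching the bot's token as a bearer credential when fetching the file (the pattern Slack documents for private file downloads), and sniffing the response so a disguised HTML error page is never mistaken for audio. A minimal sketch, with illustrative helper names:

```python
import urllib.request

def looks_like_html(data: bytes) -> bool:
    """A 'permission denied' page arrives as HTML, not audio."""
    head = data[:256].lstrip().lower()
    return head.startswith(b"<!doctype html") or head.startswith(b"<html")

def download_slack_file(url_private_download: str, bot_token: str) -> bytes:
    """Fetch a Slack-hosted file with proper credentials attached."""
    req = urllib.request.Request(
        url_private_download,
        headers={"Authorization": f"Bearer {bot_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
    if looks_like_html(data):
        raise RuntimeError("Got HTML instead of audio; check the bot's permissions")
    return data
```

The sniff check is cheap insurance: without it, the bot would hand a "login required" web page to the transcription step and fail with a confusing error much further downstream.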

The permissions problem. Even after fixing the download method, Slack said the bot didn't have the right permission to look up file information. The permission had been added on paper but not yet officially applied — like being granted a new key card but not having it activated. Reinstalling the app to the workspace activated it, and everything clicked into place.

After working through those three layers, one voice message saying hello produced a warm, detailed introduction from the bot, explaining its purpose, listing the kinds of questions it would help work through, and asking if we were ready to dive in. It addressed the user by name. All of that from a single spoken word.

A discovery worth calling out from this session: the bot is powered by Claude, an AI trained on an enormous body of human knowledge, including software documentation, business analysis best practices, and how good interviewers ask questions. When we tell it "you are conducting a requirements interview for a dashboard project," we are not teaching it what that means. It already knows. We are simply telling it which part of its knowledge to draw on, and how to behave while doing so. It's a bit like collaborating with an expert colleague who already has twenty years of experience in your exact field. You don't need to train them from scratch; you just brief them on the specific engagement. This means that as we refine the instructions we give the bot, we are not building intelligence from the ground up. We are shaping and directing intelligence that already exists. That is a powerful and efficient foundation to build on.

Turning a Chatbot Into an Interviewer

There is a big difference between a chatbot and an interviewer. A chatbot responds to whatever you say. An interviewer has a job to do. It knows what information it needs to collect, what order to collect it in, when to follow a thread and when to bring the conversation back on track, and when it has enough to move on.

This session gave the Dashboard Collaborator all of that.

The bot follows a structured interview process called the Interview Spine — eight stages that take a collaborator from a simple greeting all the way through to a finished set of requirements documents. Each stage has a purpose, a list of topics to cover, and rules for how to handle the unexpected. The bot knows which stage it is in at every moment and only focuses on the work of that stage, nothing more, nothing less.

As the bot grew more capable, the code behind it grew more complex. We made a deliberate decision to keep that complexity manageable by splitting the code into five separate files, each with one clear job. Think of it like a well-run kitchen. The chef does not also do the dishes, manage the inventory, and seat the customers. Each person has a role. The five files work the same way: one handles the conversation flow, one handles saving and loading files, one handles the pause and restart commands, one handles voice transcription, and one holds all the interview instructions.

Here is what the bot could do by the end of this session that it could not do before.

It follows a structured interview. The bot works through eight defined stages in order, and it knows where it is and what it needs to accomplish at each step: welcome and orientation, project context, business need, audience, data questions, data sources, loose ends, and summary.

It handles vague answers. If a collaborator gives an unclear or incomplete answer, the bot probes for clarification, but only twice. After two attempts, it notes the gap and moves on. Nothing gets stuck.
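The two-attempt rule is essentially a small counter around a vagueness judgment. In the real system the AI itself judges whether an answer is vague; the heuristic below is a trivial stand-in so the control flow can be shown on its own.

```python
MAX_PROBES = 2

def handle_answer(answer: str, probes_used: int):
    """Decide whether to probe again, or note the gap and move on.

    Returns (action, updated probe count). The length check is a stand-in
    for the AI's own judgment of whether the answer is vague.
    """
    vague = len(answer.split()) < 4  # stand-in heuristic only
    if vague and probes_used < MAX_PROBES:
        return ("probe", probes_used + 1)
    if vague:
        return ("note_gap_and_advance", probes_used)
    return ("advance", 0)
```

The important property is the third branch: after two probes, a still-vague answer is recorded as a gap and the interview moves forward, so nothing ever loops.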

It follows tangents and returns. If a collaborator brings up something interesting that goes beyond the current question, the bot follows that thread briefly, extracts the useful information, and then explicitly returns to where it left off. It never gets lost.

It remembers everything across sessions. If a collaborator pauses and comes back the next day, the bot picks up exactly where it left off, recapping what was covered and continuing from the right stage.

It stays strictly on task. We tested this directly. When a collaborator asked for emotional support during the interview, the bot declined to engage and redirected to the interview. When a collaborator pointed out a repeated question, the bot acknowledged it and moved on without debate. When a collaborator asked the bot a clarifying question, it answered briefly and returned immediately to the interview.

It keeps separate records for every collaborator. Each person who does an interview gets their own file, named with their name and the date. Multiple interviews can happen simultaneously without any risk of one person's answers affecting another's.
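Per-collaborator isolation starts with the filename. A sketch of building a filesystem-safe name from the person's name and the interview date (the exact naming scheme is an assumption):

```python
import re
from datetime import date

def interview_filename(collaborator: str, day: date) -> str:
    """Build a filesystem-safe filename from the person's name and the date."""
    slug = re.sub(r"[^a-z0-9]+", "-", collaborator.lower()).strip("-")
    return f"{slug}-{day.isoformat()}.md"
```

Because every interview writes to its own file, two people can be mid-interview at the same moment without any shared state to corrupt.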

Testing was thorough and deliberate. We ran twelve distinct scenarios, covering every major behavior from basic file creation all the way to emotional deflection. Every single test passed. The most satisfying moment was watching the bot handle a collaborator who kept giving vague, non-committal answers. The bot probed twice, noted the gap, and moved on cleanly — exactly as designed. No loops, no frustration, just forward progress.

One thing worth noting: the intelligence behind the bot's interview behavior is not hard-coded logic. It is a set of plain-English instructions that the AI reads and follows on every exchange. This means that improving the bot's interview behavior in the future is largely a matter of refining those instructions, not rewriting code. The system is designed to be tuned and improved over time with relatively low effort.

Fixing the Rough Edges and Building the Synthesis Engine

Coming into this session, the bot could conduct a structured requirements interview, handle voice messages, pause and resume sessions, and produce output files at the end. The foundation was solid, but there were rough edges — and one major missing piece: the bot could collect input from one person at a time, but there was no way to bring all those individual interviews together into a single, unified picture.

This session fixed the rough edges and built that missing piece.

When the Bot Started Talking to Itself

One of the persistent bugs was the bot occasionally sending raw computer code — specifically JSON, a structured data format — directly into the Slack chat instead of a normal conversational message. Imagine asking someone a question and instead of answering, they hand you a spreadsheet of internal notes. That is what was happening.

The root cause was that the bot was asking Claude to do two different jobs at the same time: hold a conversation and update its internal records. Those two jobs have very different output formats, and Claude was sometimes getting them mixed up. The fix was to split those two jobs into two completely separate requests. The first request says: talk to the person, write a normal response. The second request, which happens silently in the background, says: update your internal records in structured format. The person in the chat only ever sees the output of the first request. If the first request ever accidentally produces structured data anyway, a new safety check catches it and suppresses it before it reaches Slack.
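The last-resort safety check can be sketched as a filter on the conversational reply: if what came back parses as JSON, it is internal bookkeeping that leaked, and it never reaches Slack. The function name and exact policy here are illustrative.

```python
import json

def suppress_structured_output(reply: str):
    """Return the reply if it is conversational; None if raw JSON leaked through."""
    candidate = reply.strip()
    if candidate.startswith(("{", "[")):
        try:
            json.loads(candidate)
            return None  # valid JSON: suppress, never show it in the chat
        except json.JSONDecodeError:
            pass  # starts like JSON but isn't; treat as normal text
    return reply
```

The split into two requests means this guard should almost never fire; when it does, the person in Slack simply sees nothing extra rather than a spreadsheet of internal notes.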

The Awkward Silence After Every Interview

After a completed interview, there was a moment that felt like the bot had crashed. It would send a message saying it was ready to prepare a summary, and then nothing would happen. The user would sit there wondering if something had gone wrong. Only after sending another message would the summary finally appear.

This was simply a matter of the bot waiting for a signal it did not actually need. The fix: after sending that handoff message, generate the summary immediately. Do not wait. The result is a seamless experience. The handoff message and the full summary now appear one after the other within a few seconds.

The Bot That Couldn't Stop Saying Goodbye

A related problem was that at the end of the interview, the bot was wrapping up the conversation on its own — thanking the person for their time, wishing them well, and signing off — without ever presenting the summary or asking for confirmation. The interview would end without producing any output.

This turned out to be the AI acting on its natural instinct to close a conversation politely. The instructions had to be very explicit: you are not allowed to close this interview. Ever. That is not your job. Your only job at the end is to hand off to the summary stage. That rule was added in two places in the instructions to make sure it stuck.

A Testing Shortcut Worth Keeping

Running a full interview from start to finish takes 30 to 60 minutes. Doing that every time to test something at the end of the process is impractical. We added a shortcut command for testing: typing skip to 5, for example, jumps the bot directly to Stage 5, fills in the earlier stages with placeholder notes, and picks up the interview from there. This made it possible to test the summary and output generation features in minutes rather than hours.
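The shortcut amounts to recognizing a tester's message and pre-filling the skipped stages with placeholder notes. A minimal sketch, assuming eight stages and an illustrative command format:

```python
import re

NUM_STAGES = 8

def parse_skip_command(text: str):
    """Recognize a tester's 'skip to N' message; return the target stage or None."""
    match = re.fullmatch(r"skip to (\d+)", text.strip().lower())
    if not match:
        return None
    stage = int(match.group(1))
    return stage if 1 <= stage <= NUM_STAGES else None

def placeholder_notes(target_stage: int) -> dict:
    """Fill every earlier stage with a placeholder so downstream code has input."""
    return {n: "(placeholder - skipped during testing)" for n in range(1, target_stage)}
```

Because the summary and synthesis steps only read stage notes, they cannot tell the difference between a real hour-long interview and placeholders, which is exactly what makes end-of-pipeline testing fast.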

What Happens When Five People All Have Different Opinions

This was the centerpiece of the session. Up to now, the bot could interview one person and produce a set of notes for that person. But a real requirements process involves multiple stakeholders, each with their own perspective, their own priorities, and sometimes their own conflicting opinions about what the dashboard should do.

We built the Synthesis Engine. A project leader types compile in Slack, and the bot automatically gathers all the interview output files, reads them, and produces four synthesized documents that represent the combined input of everyone who was interviewed.

The synthesis follows a strict set of rules designed to be fair and complete. Nothing is deleted. Nothing is invented. The bot does not editorialize or offer opinions. It does not treat the process like a vote where the majority wins. A view held by only one person is just as valid as a view held by five. Where people agree, the synthesized document states the finding directly. Where people disagree, the conflict is explicitly flagged for the group to resolve together.

The four synthesized documents work like this:

The raw notes collect all interviews into a single document with each contributor's section clearly labeled. No processing, no interpretation — just all the original notes in one place.

The brief is rewritten as a single unified narrative. Where two interviewees described the purpose of the dashboard the same way, that description appears once. Where they described it differently, both descriptions appear with a clear conflict flag for the group to resolve.

The user stories work similarly. Stories that are essentially asking for the same thing are merged into one. Stories that are asking for conflicting things are preserved and flagged in a specific format that makes the conflict easy to read and discuss.

The metrics document integrates measurements that are clearly the same thing under the same name. A metric defined differently by multiple stakeholders is preserved with all its definitions listed. It's flagged as potentially having multiple meanings that must be resolved before the dashboard can be built.
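The "nothing deleted, conflicts flagged" rule for metrics can be sketched as grouping every definition by metric name: one definition means a clean merge, more than one means a flagged conflict with every contributor's wording preserved. The input and output formats here are assumptions for illustration.

```python
from collections import defaultdict

def synthesize_metrics(interviews: dict) -> list:
    """Merge metric definitions across interviews; flag any metric defined differently.

    `interviews` maps contributor name -> {metric name: definition}.
    """
    grouped = defaultdict(lambda: defaultdict(list))
    for person, metrics in interviews.items():
        for name, definition in metrics.items():
            grouped[name][definition].append(person)

    lines = []
    for name, definitions in sorted(grouped.items()):
        if len(definitions) == 1:
            lines.append(f"{name}: {next(iter(definitions))}")
        else:
            lines.append(f"{name}: CONFLICT - multiple definitions, must be resolved:")
            for definition, people in definitions.items():
                lines.append(f"  - {definition} ({', '.join(people)})")
    return lines
```

Note that the logic never counts votes: a definition held by one person is listed alongside one held by five, which is exactly the fairness rule the synthesis follows.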

What the Tool Can Do Right Now

The Dashboard Collaborator can now:

  • Conduct a full structured requirements interview with any number of stakeholders
  • Accept typed or spoken input
  • Pause, resume, and restart sessions
  • Generate four individual output documents per interviewee
  • Compile all interviews into four synthesized documents ready for a group review session
  • Flag every conflict explicitly so nothing slips through unresolved

This completes Phase 1. The bot is a functioning end-to-end requirements collection and synthesis tool.

Phase 2: Making It Work Across Teams

Phase 1 assumed that all interview files would live in one folder on one machine. Phase 2 will tackle the real-world scenario where team members are conducting interviews on different computers in different locations. That means either running the bot in the cloud or uploading the final outputs to the cloud for processing — so the Synthesis Engine has everything it needs in one place.

There are also a handful of smaller polish items — things that work but could be smoother — that were noted during testing and deferred to the next phase. One of those is packaging the bot so it runs automatically in the background on Windows, with no need to open a terminal and start it manually.

The engine is running. But collecting and synthesizing requirements is only the beginning. The real goal is what comes after, turning that final requirements document into a prototype data model for review, and from there, a prototype dashboard ready for approval.

This post is part of Tag1’s AI Applied series, where we share how we're using AI inside our own work before bringing it to clients. Our goal is to be transparent about what works, what doesn’t, and what we are still figuring out, so that together, we can build a more practical, responsible path for AI adoption.

Want to bring practical, proven AI adoption strategies to your organization? Let's start a conversation! We'd love to hear from you.
