
Jesse: The Missing User Manual

Hank VanZile - Sr. Director, Customer Experience
May 6, 2026
At Tag1, we believe in proving AI within our own work before recommending it to clients. This post is part of our AI Applied content series, where team members share real stories of how they're using Artificial Intelligence and the insights and lessons they learn along the way. Here Tag1's Sr. Director, Customer Experience, Hank VanZile shares how he adopted Jesse, a personal AI assistant built on plain markdown files, and used conversation rather than configuration to turn it into a single source of truth across email, Slack, GitHub, and calendar.

I lead Customer Experience at Tag1, which in practice means I set marketing strategy, manage key client relationships, and sit on the company's leadership team. I also context-switch between the execution work that comes with all of those - reviewing case study drafts, merging pull requests for tag1.com, administering our CRM, etc. (And that's just the professional side; I'm not even talking about my daughter starting college in the fall, or my aging parents' medical appointments, or the older New England house that's always developing a new opinion about deferred maintenance. Not yet, anyway.)

I'm comfortable wearing a lot of hats, but what often weighs on me is the cognitive load of tracking too many things across too many places - the follow-up email I was sure I'd sent but hadn't, the open PR I kept meaning to review, the dozen Slack messages I'd "saved for later." A low, steady feeling of being scattered bothers me more than any one missed item ever could.

How Jesse Got Into My Life

In early March 2026, Tag1’s Founding Partner/CEO Jeremy Andrews published a post called "Building My AI Assistant." The premise was appealing in its simplicity: a personal AI assistant ("Jesse") built on plain markdown files, with an instruction document and a handful of guidelines shaping the assistant's behavior. Your data stays in a "vault" you control, the assistant reads the folder, processes your instructions via an LLM, and documents every outcome in those markdown files.
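Jeremy's post describes the vault only at a high level, so the layout below is purely illustrative; every folder and file name here is my hypothetical example, not Jesse's actual structure. The point is simply that it's a plain folder of markdown files: one instruction document, a handful of guidelines, and the records the assistant writes as it works.

```
vault/
├── instructions.md      # how the assistant should behave (it edits this itself)
├── guidelines/          # individual behavior rules, one topic per file
├── tasks/               # the daily action list, carried forward each morning
└── notes/               # documented outcomes, one markdown file per item
```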

I was already a regular user of AI and was beginning to tinker with Cowork, so I set up my own vault the following week - copied the structure, personalized the instruction file, configured the connectors - and by any reasonable measure I was up and running.

And then I kind of stalled.

Mind the Gap

I was using Jesse here and there, but I was struggling. I kept Slacking Jeremy variations of "how do I do X?" and "why do the docs say Y when Jesse is doing Z?" Every time, he'd reply with some version of "just tell Jesse to do it the way you want it," which sounded like a non-answer... until it wasn't.

The thing is, every other system I've adopted in my career has had some version of the same bargain: you learn the tool's way of doing things, you commit its quirks to memory, and when it doesn't work the way you want, maybe you can change some configuration, but mostly you adapt. Jesse inverts that. When it does something you don't like, you tell it, and it changes - not as a workaround or a preference toggle, but because the conversation itself is the configuration layer. It edits its own instruction file based on what you said, and the next morning it works differently.

That simultaneously felt completely comfortable and completely revolutionary. I've been typing into chat boxes since before I started my professional career - to talk to people, mostly, or occasionally to navigate one of those choose-your-own-adventure support agents that walks you through a scripted decision tree - but I'd never used a chat interface where the conversation permanently changed how the tool behaved.

Here are two examples of what this looks like in practice:

Jesse's codebase lives on GitHub, and Jeremy pushes refinements upstream as the system evolves. With most other tools, integrating those refinements would mean using git to pull the latest updates and resolving merge conflicts, followed by a bunch of testing to make sure your configuration and personalizations were still in place. With Jesse, you just ask the assistant to look at what's changed in the source repository. Jesse reads the updates, tells you what's new, flags conflicts with your existing configuration, and you decide what to keep. The usual version control workflow is replaced by a conversation where the tool explains what it's about to do and asks whether you agree.

Or here's something more personal: when I'm using an LLM to help organize a complex task or think through a problem, there's a natural tendency for the AI to ask a lot of clarifying questions. That's ultimately good; I want it to have enough context. But it usually presents those questions as a comprehensive list: "Here are seven things I need you to answer." That doesn't work for me, because the answer to question one often changes or eliminates questions three through five, yet I still feel pressure to answer them in order, which creates a kind of cognitive dissonance. So I told Jesse to stop doing that, and this is what ended up in my instruction file:

Work through questions conversationally, not as lists. When research, analysis, or any multi-question scenario requires Hank's input, do not present a batch of questions or rapid-fire multiple choice. Instead: (1) Lead with the question whose answer unlocks the most downstream context. (2) After each answer, state what you now understand and what implications it has before asking the next question. (3) If Hank defers a question, log it and move on without re-asking next turn.

I didn't write that. I described what was annoying me, and Jesse wrote a guideline that fixed it. That's the adoption model in miniature: use the system, notice what bothers you, say so, and the system reshapes itself around the way you actually work.

What Jesse Does for Me

Once I stopped fighting and started really using it, Jesse's value showed up quickly and in a lot of places. A few specific ones:

The single source of truth. A typical day generates action items from email, Slack threads, GitHub issues, calendar meetings, and notes I capture throughout the day. Before Jesse I was doing what a lot of us in knowledge work do: running a half-remembered mental index of what's probably where, scanning four different apps to make sure I hadn't missed the one that mattered, and hoping nothing important was buried in a channel I hadn't checked since Tuesday. Jesse pulls from all of those sources at the start of each day and builds one list - not a perfect list, and not one I always agree with, but a single list in one place that I can review, reshuffle, and sign off on before the day starts. The mental index I was maintaining gets externalized into something I can actually look at.
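The post doesn't show Jesse's internals, so this is only a toy sketch of the "one list" idea; the `ActionItem` type and `build_daily_list` function are hypothetical names of my own, and a real assistant would pull from the actual email, Slack, GitHub, and calendar connectors rather than hard-coded lists.

```python
from dataclasses import dataclass

@dataclass
class ActionItem:
    source: str  # where the item came from: "email", "slack", "github", "calendar"
    text: str    # the action to take

def build_daily_list(*sources):
    """Merge action items from several connectors into one reviewable list,
    dropping items that were captured in more than one place."""
    merged, seen = [], set()
    for items in sources:
        for item in items:
            key = item.text.strip().lower()
            if key not in seen:
                seen.add(key)
                merged.append(item)
    return merged

# The same follow-up shows up in both email and Slack; it appears once in the list.
email = [ActionItem("email", "Send conference follow-up")]
slack = [ActionItem("slack", "Reply to case-study thread"),
         ActionItem("slack", "Send conference follow-up")]
today = build_daily_list(email, slack)
```

The value isn't in the merge logic, which is trivial; it's that the output is one list in one place that a human can review and reshuffle before the day starts.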

The small things stopped slipping. Writing things down is the oldest productivity advice in the world, and I'm not going to pretend Jesse invented it; the difference is that once something is in Jesse's list it keeps showing up every morning until I either do it or consciously decide not to. The conference follow-up email sitting in my drafts for three days, the Slack thread I forgot to circle back on Monday - those don't quietly disappear into a note I'll never reopen. They're in my face at 9am the next day, and the day after that, and that daily confrontation is what makes the difference.

A true assistant for work that doesn't need my expertise. Every week I have a stand-up in which I have three minutes to present a summary of what I've been working on. Every week I log my client partner hours into our time tracking system. Before Jesse, both of those tasks meant sitting down and reconstructing the week from memory, notes, and calendar entries, which was never hard exactly but always took longer than it should have. Now Jesse creates initial drafts of both from the work it already tracks - it was involved in most of it, so the raw material is already there. But it consistently over-includes: Jesse wants to list everything, and when I have three minutes in a stand-up I need to be the one deciding what matters. Same with the time log: Jesse drafts it, I adjust. The assembly is mechanical; the editorial judgment is mine.

This applies to larger research, too. When our CRM contract came up for renewal, I had Jesse do a full competitive analysis - features, pricing, the parts of our current tool that weren't working for us and whether the alternatives actually solved those problems. It did the research, cited its sources, and pulled it together into a cohesive report I could use to evaluate our options. My job was knowing how our team actually uses the CRM and evaluating whether the report's conclusions held up against that reality.

What's Next

A few weeks in, the asymmetry became hard to ignore. Work had this system and my personal life didn't, and personal life - the kid, the parents, the house - often carries the higher stakes. The things I was most afraid of dropping were the things that had no project management tool and no Slack channel to keep them visible. That story is next.

For now, the thing I keep coming back to is how much simpler it all became once I finally accepted what Jeremy kept telling me. Jesse doesn't match the mental model most of us have for what "adopting a tool" means. There is no onboarding to complete and no feature tour to sit through; there is just the work, and an assistant that gets better at helping with it every time I tell it what I actually need.

This post is part of Tag1’s AI Applied series, where we share how we're using AI inside our own work before bringing it to clients. Our goal is to be transparent about what works, what doesn’t, and what we are still figuring out, so that together, we can build a more practical, responsible path for AI adoption.

Want to bring practical, proven AI adoption strategies to your organization? Let's start a conversation! We'd love to hear from you.
