2026 - Colabra
How I run GTM and sales with AI
The specific system behind my outbound, meeting prep, follow-ups, and re-engagement, all grounded in transcripts and run through custom AI skills.
Most founders I talk to use AI for sales the way they use a thesaurus. They write an email, ask the model to make it better, and send it. That's fine, but it's not leverage. It's cosmetic.
What I do is different. I treat every part of my sales motion as a system that can be encoded, grounded in real data, and run at volume. This post is the specific version of that.
Cold outreach: one skill, nine batches, one ban list
My cold outreach is run through a skill file that encodes everything I've learned about what works and what doesn't in M&A due diligence outreach. It includes:
- Persona classification (Corp Dev Deal Execution, Integration Lead, PE Deal Lead, Law Firm M&A Partner, each with different pain framings)
- A verified acquisition trigger for each prospect (something specific that happened in their company in the last 30 days)
- A strict writing rule set (no em dashes, under 100 words, cause-and-effect pain structure)
- A running ban list of phrases that stopped working
The ban list is the part I'd emphasize. Every batch of outreach surfaces a few phrases that get lower reply rates, and those go on the list. By batch nine, the list included things like "asks to see the source" as a pain framing for PE Deal Leads (didn't resonate), and the opener "Hi [Name], I'm Aoi" (redundant, the sender line already said it).
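The batch discipline above is easy to sketch mechanically. Here's a minimal, hypothetical version of the lint step: the ban list entries and word limit mirror the rules described in this post, but the function itself is illustrative, not the actual skill file.

```python
# Hypothetical sketch of the outreach lint step. The ban list entries
# are examples from this post; a real list would grow with every batch.

BAN_LIST = [
    "asks to see the source",   # stopped resonating with PE Deal Leads
    "Hi [Name], I'm Aoi",       # redundant opener; the sender line already says it
]

MAX_WORDS = 100  # strict writing rule: keep cold emails under 100 words

def lint_outreach(draft: str) -> list[str]:
    """Return a list of rule violations for a draft cold email."""
    violations = []
    if "\u2014" in draft:  # em dash, banned anywhere
        violations.append("contains an em dash")
    if len(draft.split()) > MAX_WORDS:
        violations.append(f"over {MAX_WORDS} words")
    for phrase in BAN_LIST:
        if phrase.lower() in draft.lower():
            violations.append(f"banned phrase: {phrase!r}")
    return violations
```

A draft only goes out when this returns an empty list; every new phrase that underperforms gets appended to `BAN_LIST`, which is what makes the batches compound.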
What makes this compounding is that each batch gets cheaper and better than the last one. The skill is a living document. The model writes emails that are already aligned to rules I've learned, so I'm not re-explaining them every time.
The batch of emails I send on a Tuesday is measurably better than the one I sent six weeks ago. Not because I'm a better writer. Because the skill is.
Meeting prep: the seven-step framework, per prospect
Before every discovery call, demo, or closing meeting, I have the model generate a prep document. The template is based on a framework from Jen Allen-Knuth and Armand Farrokh, adapted into my own skill.
The document includes:
- Research on the prospect's company (recent deals, leadership changes, tech stack mentions, public strategic priorities)
- Likely pain points based on their role and industry
- The specific problem I'm trying to get them to agree to in this meeting
- A talk track with opening, transition, and objection-handling language
- Three to five questions designed to surface their actual pain, not a pain I'm assuming they have
The model drafts the whole thing in about two minutes, grounded in real data pulled from LinkedIn, press releases, their website, and any prior Fireflies or Granola transcripts if I've met them before.
What this replaces is the thirty minutes I used to spend skimming a prospect's LinkedIn before a call, hoping I'd absorb enough to sound informed. The AI-generated prep is more thorough than my skim was and takes a fraction of the time. The thirty minutes I save goes into actual human preparation: practicing the hardest question out loud, deciding what I'll cut if the meeting runs long, and thinking about what I want the prospect to feel at the end of the call.
That's the right split. The model handles the research. I handle the judgment.
Follow-ups grounded in the actual transcript
Every sales follow-up I send is written against the actual transcript of the meeting it references. Not my memory. The tape.
I'll give you the rule from my playbook verbatim: "Always pull meeting transcripts first to verify claims and get exact language. Never invent quotes or details."
Here's why this matters. After a good call, my memory of what the prospect said is generous. I remember them sounding more enthusiastic than they did. I remember their objections as lighter. I remember my own answers as more persuasive. If I write a follow-up from memory, I'll end up sending something that references a conversation slightly different from the one we actually had. The prospect notices. It breaks trust.
When the follow-up is written against the transcript, the prospect reads it and feels heard. Their exact pain point comes back to them. Their exact vocabulary appears in my reply. The email says "you mentioned that your associates are spending the first week of a diligence just organizing documents," not "you mentioned your team spends a lot of time on admin." The specificity is the difference between a reply and a deleted message.
This one change alone probably doubled my follow-up reply rate.
The humanizer pass
Every sales email, every LinkedIn message, every blurb I draft with AI assistance gets run through a skill called the humanizer before I send it. The humanizer looks for twenty-four specific patterns that are AI tells.
Some examples of what it catches and rewrites:
- Em dashes, anywhere
- Sentences that use "not just X, but Y" (classic AI parallelism)
- Vocabulary words like delve, tapestry, pivotal, landscape
- Rule-of-three lists that don't earn their three elements
- Any sentence that opens with an inflated significance claim
If I skipped this pass, prospects would smell AI on my emails within the first line. Founder emails that sound like they came out of ChatGPT get ignored. Founder emails that sound like the founder wrote them get replies.
The investment is worth it. An email takes five extra seconds to lint. That's the cheapest quality upgrade in my entire stack.
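To make the humanizer pass concrete, here is a small sketch of how a few of those checks could be expressed as patterns. This covers only an illustrative subset of the two dozen tells mentioned above; the pattern names and regexes are my assumptions, not the actual skill's list.

```python
import re

# Illustrative subset of the "AI tell" checks described above.
AI_TELLS = {
    "em dash": re.compile("\u2014"),
    "not-just-but parallelism": re.compile(r"\bnot just\b.*\bbut\b", re.IGNORECASE),
    "AI vocabulary": re.compile(r"\b(delve|tapestry|pivotal|landscape)\b", re.IGNORECASE),
    "inflated opener": re.compile(r"^(In today's|In an era of)", re.IGNORECASE),
}

def humanize_report(text: str) -> list[str]:
    """Flag every AI-tell pattern that appears in a draft email."""
    return [name for name, pattern in AI_TELLS.items() if pattern.search(text)]
```

The pass is a linter, not a rewriter: it names the tell, and the rewrite stays a human decision.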
Forwardable blurbs, tailored by persona
I keep three versions of a short Colabra blurb ready to send: one for PE, one for transaction advisors, one for corp dev. Each one leads with that persona's specific pain, not a generic pitch.
When someone offers to make an intro on my behalf, I don't send them a generic overview and ask them to customize it. I send them the blurb that matches the persona of the person they're introducing me to. Zero friction on their end. They forward it verbatim, and the recipient opens an email that already speaks their vocabulary.
The AI generates these blurbs from my positioning docs, but the real work is upstream. I had to decide what the three personas are, what each one's actual pain is, and what evidence each one finds credible. Once that's written down, the model produces blurbs that feel handcrafted because the inputs were.
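Mechanically, the forwardable-blurb step is just a lookup keyed on persona. A minimal sketch, with placeholder blurb text standing in for the real positioning copy:

```python
# Hypothetical persona-to-blurb lookup. The blurb strings are
# placeholders, not Colabra's actual positioning copy.

BLURBS = {
    "pe": "For PE deal teams: <pain-first blurb for PE deal leads>",
    "advisor": "For transaction advisors: <pain-first blurb for advisors>",
    "corp_dev": "For corp dev: <pain-first blurb for corp dev>",
}

def blurb_for(persona: str) -> str:
    """Return the forwardable blurb matching the recipient's persona."""
    try:
        return BLURBS[persona]
    except KeyError:
        raise ValueError(f"no blurb for persona {persona!r}; write one before asking for the intro")
```

The deliberate failure mode matters: if a persona isn't in the map, the answer is to write the blurb, not to send a generic one.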
Re-engagement of stalled deals
When a deal goes quiet, I run a re-engagement play. The model pulls every prior email and transcript with that prospect, identifies what specifically they said they needed to see, and drafts a new email that references that specific thing.
The rule I use is direct: skip small talk, skip congrats openers, lead with something new they can now do in the product, and offer a low-effort opt-out as a parenthetical. No "I know you're busy, but." No "just circling back."
This replaces the generic follow-up most founders send (the "checking in" email that reads like spam). Instead, a stalled prospect gets an email that says something like: "You mentioned you wanted to see how we handle employment agreements. We shipped that capability last month. Two-minute Loom if useful, otherwise no pressure."
The conversion on those is meaningfully higher than generic follow-ups. The prospect can tell I actually remember what they said.
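The structure of that re-engagement email is rigid enough to write down. A minimal sketch, assuming two inputs the model extracts from prior transcripts and emails (the field names and template are illustrative, not the actual skill):

```python
# Hypothetical re-engagement template. Inputs would come from prior
# transcripts: what the prospect said they needed to see, and what's new.

def draft_reengagement(needed_to_see: str, whats_new: str) -> str:
    """Lead with the thing they asked for, then the news, then a
    low-effort opt-out as a parenthetical. No small talk, no congrats."""
    return (
        f"You mentioned you wanted to see {needed_to_see}. "
        f"{whats_new} "
        "(Two-minute Loom if useful, otherwise no pressure.)"
    )
```

Everything the rule bans, this template simply has no slot for: there is nowhere to put "just circling back."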
Why this works, in one paragraph
The reason all of this works is that none of it is AI pretending to be a salesperson. It's AI doing the parts that AI does well (research, drafting against rules, grounding in transcripts, running quality passes) and leaving me to do the parts humans have to do (judgment, taste, the actual relationship). The model doesn't close deals. I close deals. But the model makes every step before the close faster, more specific, and more grounded in what the prospect actually said, not what I remember them saying.
The founders I know who try to use AI to "automate sales" are usually disappointed. Automated sales outreach produces slop at volume, and buyers can smell it. What works is using AI to do the research-and-writing labor of a sales team, while the founder provides the judgment that makes each output specific to this deal, this prospect, this moment.
That's the setup. It took me about a year to build the skills. It would take someone else less time today, because more of the patterns are documented now. But the core principle is the same: if you want leverage from AI in sales, you have to be willing to write down what good looks like.
The model is the compiler. Your sales judgment is the source code. The output is only as good as what you put in.