
The Geographer, Johannes Vermeer, 1669

PILOT HANDBOOK

"We'll Know It When We See It" Is Killing Your Pilots

Ask the buyer what success looks like before the pilot starts. If the answer is vague — "we'll know it when we see it," "we just want to kick the tires," "let's see how the team likes it" — you already know how this pilot ends. It ends in silence. A few follow-up emails that don't get returned. Eventually, a polite note saying they've "decided to go a different direction."

I didn't understand this for years. I thought the product was the thing that mattered. If the product was good enough, success would be obvious and the deal would close. But enterprise deals don't work that way. Without a defined finish line, your pilot can't "win." It can only continue, or fade. And fading is what almost all of them do.

Forrester's 2023 research found that enterprise pilots with predefined success criteria are 3.2 times more likely to convert to paid contracts than open-ended evaluations. That's not a marginal improvement. That's the difference between a pilot program that works and one that doesn't.

Why vague criteria feel safe but are actually fatal

I think the reason vague criteria are so common is that they feel like the polite option. Nobody wants to commit to a number before they've seen the product in action. The buyer doesn't want to be held to a standard they're not sure is realistic. And founders, honestly, are often afraid that defining clear criteria will give the customer a reason to say no.

So everyone agrees to something soft and moves on. "We'll evaluate user adoption." "We'll see if it integrates well." "We'll get feedback from the team." It feels collaborative. It feels low-pressure. And it's fatal.

Here's what happens when criteria are vague. Multiple stakeholders, each with their own agenda, develop their own private definition of success. The VP of engineering cares about integration complexity. The head of product cares about user engagement. The CFO cares about cost savings. Nobody has agreed on which of these matters most — so when the pilot ends, you're not presenting results against a standard. You're walking into a room where three people are scoring you on three different tests, and none of them told you what was on the exam.

Even worse, vague criteria get redefined after the fact. If the pilot went well, great — but someone who was never enthusiastic can always say "well, we didn't really see impact on the metric that matters to me." If the pilot didn't go well, the goalposts move. Without something written down before you start, you have no anchor. The outcome becomes a political question, not an empirical one.

The single-KPI approach

Mitch Morando, who's built a whole methodology around pilot execution, has a principle I think about constantly: the more goals you have, the less clear your impact will be. His recommendation is to pick a single KPI with measurable business impact. One number. That's it.

This sounds almost too simple, but I've come to think it's exactly right. The KPI should meet three criteria. First, it should tie directly to the business pain that started the conversation — the reason this prospect reached out in the first place. Second, it should be measurable within the pilot timeline. If the KPI requires six months of data to evaluate and your pilot is 60 days, you've picked the wrong metric. Third, and this is the one people miss, it should be something the buyer's boss cares about. Your champion might care about user experience. Their VP cares about revenue impact or cost reduction. Pick the one that matters at the level where the buying decision gets made.

Here's what good criteria look like versus bad ones:

Good: "Reduce average ticket resolution time from 48 hours to 24 hours." Bad: "Improve customer support efficiency." The first one has a number, a baseline, and a target. The second one is a wish.

Good: "Increase pipeline qualified leads by 15 percent during the pilot period." Bad: "Show us how your tool helps with lead gen." The first one is testable. The second one is a demo request disguised as a success criterion.

Good: "Process 500 invoices with less than 2 percent error rate." Bad: "We want to see if it works with our invoices." The first one you can objectively evaluate. The second one you can't.

I know a single KPI feels reductive. Enterprise problems are complex, and the temptation is to capture that complexity in your success criteria. Resist that temptation. You can track secondary metrics. You can note qualitative feedback. But the thing that determines whether this pilot succeeded or not should be one clear number that everyone agreed to before it started.

How to get agreement

This is where it gets practical, and honestly, a little uncomfortable. Getting a prospect to commit to specific criteria requires what I think of as a reset meeting — a conversation that happens before the pilot starts, ideally with every key stakeholder present.

Here's how I think about it. Come prepared with your standard criteria — the metrics you know your product can move, based on your experience with other customers. Dipam Shah makes a point that I think is underappreciated: your success criteria should be fairly consistent from one customer to the next, because those criteria are essentially your value proposition. If you're proposing wildly different metrics for every prospect, you either don't know what your product does or you're customizing to avoid accountability. Neither is good.

Present your proposed criteria. Let the customer push back on specifics. But don't leave the meeting without written agreement on what the number is, how you'll measure it, and when you'll evaluate it. Get it in email. Not because you're being legalistic, but because memory is unreliable and people in large organizations change roles, get busy, and forget what was agreed to.

GTMnow frames this well: treat the success criteria as a mutual contract. If the seller delivers on the criteria, the buyer agrees to award the technical win. I know "contract" sounds heavy, but the principle is right. Both sides should understand what a successful outcome means and what happens next when you achieve it.

Set a review date. Put it on the calendar during this same meeting. Don't leave it to "we'll find a time." Find the time now.

What to do when they resist

Sometimes a prospect won't commit to specific criteria. This is actually useful information, even though it doesn't feel that way in the moment.

"We're still figuring out what we need" usually means they're not ready for a pilot. That's not a rejection — it's a timing issue. A good next step is to help them clarify their needs, maybe through a discovery workshop or a smaller scoping conversation, and revisit the pilot when they know what they're trying to solve. Starting a pilot with a buyer who hasn't defined their own problem is a recipe for pilot purgatory.

Simon Barth, who led sales operations at LeanIX before the SAP acquisition, has a warning worth taking seriously: if the prospect is asking for goals that you believe are simply not achievable, your alarm bells should ring. Sometimes resistance to defining criteria isn't about uncertainty — it's about the buyer knowing the criteria won't be met and wanting to keep things vague so they can evaluate without committing. That's not a partnership. That's a free trial with extra steps.

I want to be careful here, because I don't think every reluctant prospect is acting in bad faith. Some genuinely don't know what good looks like, and part of your job is to help them figure it out. But there's a difference between a buyer who says "I'm not sure what metric to use — can you help me think through that?" and a buyer who says "let's just see how it goes." The first one is coachable. The second one is a red flag.

Why this matters more than you think

A defined KPI doesn't just help you win the pilot. It gives your champion the ammunition to sell the result internally. When the pilot ends and your champion walks into a room to make the case for a full contract, they need something concrete. "The team really liked it" doesn't survive a budget meeting. "We reduced resolution time by 40 percent against the target we set, and here's the data" does.

I've written about why charging for pilots matters — this is the other side of the same coin. If you charge for the pilot, you've established that the customer is serious. If you define the KPI, you've established what serious looks like. Together, these two things — money and a measurable outcome — are the foundation of every pilot that converts.

The boring, unglamorous work of getting a specific number written down before you start is, I think, the single highest-leverage thing you can do to improve your conversion rate. It's not exciting. It's not a clever strategy. It's just discipline. And it works.



P.S. The painting is The Geographer by Johannes Vermeer, 1669. A man stands at his desk with a pair of dividers, a globe, and a sea chart. He's not exploring — he's measuring. He knows exactly what he's looking for before he starts looking. Vermeer painted this with the same obsessive precision his subject brings to the work. I chose it because the difference between a pilot that converts and one that doesn't usually comes down to whether someone did this kind of work before day one.