Why One Analysis Isn’t Enough: The Case for Running 20

The Made-For-One Difference · 8 min read · By Greg

Ask anyone to analyze a product and tell you who the ideal customer is, and they'll give you an answer. A smart person will give you a smart answer. An experienced person will give you an experienced answer. But here's what I've learned after 25 years of watching marketing decisions get made: one analysis, no matter how smart, is still one perspective. And one perspective is a dangerous thing to bet a launch on.

I'm not saying single analyses are worthless. I'm saying they're incomplete. And the difference between incomplete and complete is the difference between a campaign that works okay and a campaign that works exceptionally well.

Let me explain what I mean, and why I became obsessed with building a process that doesn't rely on a single point of view.

The problem with one analysis

When a strategist sits down to identify the ideal customer for a product, they bring a lens. Maybe it's a competitive lens: who are the competitors targeting, and where are the gaps? Maybe it's a formulation lens: what does the product actually do, and who benefits most? Maybe it's a behavioral lens: who's buying similar products, and what does their purchase journey look like?

All of these are valid lenses. Any one of them can produce a useful answer. But each lens has blind spots. The competitive lens tells you where the gaps are but might miss a customer segment that doesn't show up in competitor data. The formulation lens tells you who benefits most from the ingredients but might miss the emotional trigger that actually drives the purchase. The behavioral lens tells you who's buying now but might miss the larger, untapped segment that hasn't been reached yet.

When you run one analysis, you get one lens. The strategist picks the approach they're most comfortable with, applies it to the product, and produces a recommendation. It's thoughtful. It's professional. And it reflects one way of looking at the problem.

The issue isn't that the answer is wrong. It's that you have no way of knowing whether it's the best answer. You have nothing to compare it against. You have one perspective, and you're treating it as a conclusion.

This is how most customer research works in the agency world. One person does the thinking. One analysis gets produced. One recommendation gets made. And then a campaign gets built on it, with thousands of dollars behind it, and nobody asks: what would we have found if we'd looked at this from a different angle?

Why single-source decisions are risky

Think about how other high-stakes decisions get made. In a courtroom, you don't hear from one witness. In medicine, you don't rely on one test. In engineering, you don't run one simulation before building a bridge. In all of these fields, the principle is the same: when the stakes are high, you need multiple independent inputs before you can be confident in a conclusion.

Marketing budgets are high-stakes decisions. A supplement brand that launches against the wrong target audience can burn through $50,000-$100,000 before anyone realizes the targeting was off. An agency that builds a three-month campaign strategy around the wrong customer profile wastes hundreds of hours and risks losing the client. These aren't theoretical costs. They happen every day.

And yet, the standard process for identifying the target customer is: one intake form, one competitive scan, one strategist's judgment. That's one input, filtered through one perspective, producing one recommendation. Then we spend real money on it and hope it holds.

I spent years watching this play out. Sometimes the single analysis was right. Sometimes it was close but missed the best angle. Sometimes it was flat wrong, and we didn't find out until the campaign data came back weeks later. The pattern was clear: the confidence of the recommendation rarely matched the rigor behind it. People spoke with certainty based on a single analysis, and sometimes the certainty held up and sometimes it didn't.

That inconsistency is what made me want to build something better.

What changes when you run 20

Here's what happens when you don't stop at one analysis. You run the same evaluation 20 times, and each time you use a different analytical lens.

One round might evaluate the sub-markets purely on competitive density: where is the white space? Another round might evaluate them on formulation fit: which customer's needs align most precisely with what this product actually does? Another might focus on emotional resonance: which customer has the deepest unmet emotional need that this product addresses? Another might look at purchase intent signals. Another at lifetime value potential. Another at price sensitivity. Another at referral likelihood.

Each round produces a winner. Sometimes the same sub-market wins across multiple rounds. Sometimes different rounds produce different winners. And that variation is where the real insight lives.

When one sub-market wins 16 out of 20 rounds, you have a definitive answer. Not an opinion. Not a recommendation based on one person's experience. A finding, supported by 20 independent evaluations that approached the question from 20 different angles. The confidence in that answer is fundamentally different from the confidence in a single analysis, because you've stress-tested it against every lens that matters.

When two sub-markets are close, splitting the 20 rounds 11 to 9, that tells you something important too. It tells you the decision is genuinely tight, and the factors that differentiate them are specific and identifiable. You can look at which lenses favored one versus the other and make a decision based on the agency's specific strengths or the client's specific constraints. That's a much more useful finding than a single analysis that confidently picks one and never mentions the other was close.
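
If it helps to see the aggregation mechanics concretely, here is a minimal sketch in Python of how each round's winner gets tallied. The lens names, scores, and three-round sample are illustrative placeholders, not the actual scoring model or real client data.

```python
from collections import Counter

# Illustrative only: each round scores every sub-market through one analytical lens.
# The lenses and scores are made-up placeholders standing in for the real research.
rounds = [
    {"lens": "competitive_density", "scores": {"A": 82, "B": 74, "C": 58}},
    {"lens": "formulation_fit",     "scores": {"A": 88, "B": 71, "C": 63}},
    {"lens": "emotional_resonance", "scores": {"A": 79, "B": 84, "C": 55}},
    # ...17 more rounds, each evaluated through a different lens
]

# Each round's winner is simply its highest-scoring sub-market.
wins = Counter(max(r["scores"], key=r["scores"].get) for r in rounds)

total = len(rounds)
for submarket, count in wins.most_common():
    print(f"Sub-market {submarket} won {count} of {total} rounds ({count / total:.0%})")
```

The output is nothing more than a win count per sub-market; convergence and divergence are read straight off that tally.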

Why this isn't something you can replicate with a prompt

I know what some people are thinking. "I can ask ChatGPT to analyze my product and tell me who the ideal customer is. Why would I need 20 rounds?"

Fair question. Here's the honest answer.

If you ask an AI to analyze a product and identify the ideal customer, it will give you a single analysis. It'll be articulate. It might even be insightful. But it's one pass, using one approach, with no cross-referencing, no simulation, and no statistical validation. It's the digital equivalent of asking one smart person for their opinion.

The problem isn't the intelligence of the analysis. It's the architecture. A single prompt produces a single output. If you ask again, you'll get a slightly different answer, which should tell you something about how reliable any single answer is. There's no mechanism to run 20 independent evaluations, force each one to use a different lens, prevent them from anchoring to each other's conclusions, and then aggregate the results statistically.

But even that undersells the difference. The 20-round simulation isn't 20 prompts. It's the final layer of a system that starts long before the simulation runs. Before a single round is evaluated, the product has already been through deep formulation analysis, ingredient-level research, competitive landscape mapping, niche identification and qualification, and sub-market segmentation with full demographic, psychographic, and behavioral profiling. Each of those stages is its own body of work, built on hundreds of hours of development, testing, and refinement. The simulation in Stage 3 isn't analyzing a product. It's analyzing a product that has already been researched across two full upstream stages, with all of that context feeding into every round.

That's why a prompt can't get you here. It's not that the AI isn't smart enough. It's that a prompt doesn't know what it doesn't know. It has no upstream research to draw from. It has no sub-market profiles to evaluate against. It has no competitive density data, no formulation-fit scoring, no behavioral segmentation. It's starting from zero every time. The 20-round simulation is starting from a foundation built across two full upstream stages. The difference in output isn't incremental. It's structural.

Convergence tells you where the evidence is strong. Divergence tells you where the uncertainty is. Both are valuable. Neither shows up in a single analysis, and neither is possible without the upstream work that makes each round meaningful.

What convergence actually looks like

Let me give you a real example of what this process reveals. Without getting into client specifics, here's the kind of pattern that emerges.

A supplement brand has three potential target sub-markets. On the surface, all three look viable. A single analysis might pick whichever one the analyst feels strongest about, or whichever one has the most obvious competitive gap, and move on.

When you run 20 rounds, a different picture emerges. Sub-market A wins 16 out of 20 rounds. Sub-market B wins 4. Sub-market C wins zero.

But the numbers alone aren't the insight. The insight is in the pattern. Sub-market A doesn't just win more rounds. Its floor score, the lowest score it receives in any round, is higher than Sub-market B's average score. That means even in the rounds where Sub-market A performed worst, it was still performing at the level Sub-market B performs on an average day. That's not a slight edge. That's a structural advantage.
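
To make that floor-versus-average comparison concrete, here is a rough sketch with placeholder scores rather than real client data. It assumes you have kept each sub-market's per-round scores from the simulation.

```python
from statistics import mean

# Placeholder per-round scores for two sub-markets across 20 rounds (not real data).
scores_a = [85, 88, 79, 86, 90, 84, 78, 87, 83, 85, 88, 82, 80, 86, 89, 84, 83, 81, 87, 85]
scores_b = [72, 70, 83, 68, 71, 74, 84, 69, 73, 70, 72, 66, 85, 71, 68, 73, 70, 86, 69, 72]

floor_a = min(scores_a)   # Sub-market A's worst round
avg_b = mean(scores_b)    # Sub-market B on an average day

# The "structural advantage" check: A's worst round still beats B's average round.
print(f"A's floor: {floor_a}, B's average: {avg_b:.1f}")
print("Structural advantage" if floor_a > avg_b else "Closer than it looks")
```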

Sub-market C winning zero rounds tells you something definitive: this is not a viable target for this product at this time. A single analysis might have ranked it second or third and kept it alive as an option. Twenty rounds killed it with certainty. That clarity saves the agency from testing audiences that were never going to work.

And the specific rounds where Sub-market B outperformed Sub-market A? Those tell you exactly which analytical lenses favor B. Maybe B scores higher on price sensitivity and competitive whitespace. That's useful intelligence for Phase 2, when the brand has established itself with the A audience and is ready to expand. The 20-round process doesn't just find the winner. It maps the terrain.

Confidence matters as much as the answer

One of the things I built into the process is a confidence classification. Not every winner is created equal, and the agency needs to know the difference.

When a sub-market wins 80% or more of the rounds, that's a definitive winner. Build the entire brand strategy around this customer with confidence. When a sub-market wins 60-79% of the rounds, that's a strong favorite. Lead with this customer but keep the runner-up in view for future expansion. When it's 50-59%, that's a slight edge. Both options are viable. The deciding factor should come from the agency's knowledge of the client's specific constraints. And when no sub-market clears 50%, it's too close to call: the leading sub-markets are effectively tied, and the decision should be based on secondary factors.
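
Expressed as code, the classification is just a set of thresholds applied to the winner's share of rounds. Here is a minimal sketch using the cutoffs described above; the function name and wording are mine, not the system's.

```python
def classify_confidence(wins: int, total_rounds: int = 20) -> str:
    """Map the winning sub-market's share of rounds to a confidence tier."""
    share = wins / total_rounds
    if share >= 0.80:
        return "Definitive winner: build the brand strategy around this customer."
    if share >= 0.60:
        return "Strong favorite: lead with this customer, keep the runner-up in view."
    if share >= 0.50:
        return "Slight edge: both viable; decide on the client's specific constraints."
    return "Too close to call: effectively tied; decide on secondary factors."

print(classify_confidence(16))  # 16 of 20 rounds = 80% -> definitive winner
```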

This classification does something that a single analysis can never do: it tells you how much to trust the answer. A definitive winner gets different treatment than a slight edge. Different investment, different risk tolerance, different strategic planning. Without the classification, every recommendation sounds equally confident. With it, you know exactly where you stand.

What this means for the agency

If you run an agency and you're evaluating how to improve your strategic offering, here's what this comes down to.

The current standard for identifying a target customer is one analysis, produced by one person or one tool, based on one analytical approach. That standard produces useful but incomplete answers. Sometimes those answers are right. Sometimes they miss the best opportunity. And you never know which one you got until you've spent money finding out.

A multi-round approach doesn't guarantee perfection. Nothing does. But it dramatically reduces the odds of missing the best target. It eliminates weak options with certainty. It identifies the winner with statistical confidence instead of individual judgment. And it produces a finding you can defend, not just to the client, but to yourself.

When a client asks "How did you determine this was the right target audience?" the answer isn't "I looked at the competitive landscape and used my experience." The answer is "We ran 20 independent evaluations, each using a different analytical lens. This customer segment won 16 out of 20. Here's what the data showed."

That's a different conversation. It's a conversation that builds trust, justifies pricing, and positions the agency as something more than an execution shop. It's the kind of answer that brand leaders, the people writing the checks, have been waiting to hear from their agencies.

The difference between an opinion and a finding

I've spent my whole career in environments where decisions are backed by evidence. The marketing teams I led didn't accept "I think this is the right target" as sufficient. If someone made that claim, we wanted to know how they knew. We wanted to see the work.

The 20-round approach exists because I wanted customer research to feel like a finding, not an opinion. An opinion is what one person thinks after looking at the data. A finding is what emerges when 20 independent evaluations converge on the same answer.

Opinions can be right. They often are. But they carry the weight of one perspective. Findings carry the weight of evidence. And in a business where you're about to spend someone else's money on the back of that answer, evidence isn't a luxury. It's what separates the agencies that guess from the agencies that know.

Greg is the founder of Made-For-One Brands. He spent 25 years leading brands at Compaq, 5.11 Tactical, and Nutrabolt before building a research methodology that identifies the single best customer for any supplement or skincare product. He works exclusively with agencies.

See the methodology behind the thinking.

Book a 15-minute call. See a real deliverable. No pitch, no pressure.

Book a 15-Minute Call