
How to Prove Your Customer Evidence Program Is Actually Working

TL;DR

Most customer marketing teams know they should be proving the impact of their customer evidence program. Very few actually do it. Emily Coleman did — and she did it without a data team, a custom integration, or a single IT request.

Here’s what we cover:

  • Why most customer evidence programs stall at “we trained the reps” — and how Emily pushed past it
  • The specific metrics she uncovered (including a 1.7x lift in proof point mentions on sales calls)
  • How she built a customer evidence attribution model using Highspot, Gong, and Salesforce
  • The LLM workflow she uses to ask better questions of her data
  • A six-step playbook any customer marketer can run this week

You don’t need a data team to build a customer evidence attribution model. You need a question worth answering and the tools you already have.


Attribution is the white whale of customer evidence programs. 

Everyone’s hunting it. Very few people catch it.

Well… who here is surprised that Emily Coleman managed to catch it?

(*Not a single hand raises*)

In case you live under a LinkedIn rock, Emily leads customer marketing and advocacy at LaunchDarkly.

Recently, she took to LinkedIn to share about a project she’d been working on for the better part of the past year: revamping their customer evidence library within Highspot. 

The details of the project (while great) weren’t what caught everyone’s attention, though. The real kicker was that she had managed to attribute actual revenue-centric numbers to this initiative.

I saw the post while scrolling, and immediately shot her a DM. I had to know how she built this elusive customer evidence attribution model.

And, lucky for me, she was willing to share. Luckier for you… she was willing to put together the whole playbook and give it away for free in a session at The Outpost (our hub for customer marketers).

Here’s what she did, what she found, and how you can build your own attribution model.

The project that started it all

When Emily joined LaunchDarkly, the customer evidence situation looked like it does at most companies: things existed, but barely.

Reps were Googling their own proof points. Some were pulling from old decks. Some were only checking the case studies page on the website. No single source of truth. No confidence anyone was working with the best stuff available.

So Emily took on the massive project of overhauling their evidence library.

She consolidated everything into a dedicated spot in Highspot. Then she audited what existed, cut what was stale, and organized what remained so reps could actually find it. She built a landing page inside Highspot so reps knew where to go. Created vertical-specific pages so reps could filter by the industry their prospect cared about. Pulled in G2 reviews, Gartner grid reports, individual quote slides, one-slide case study summaries, UserEvidence assets, and YouTube testimonials.

Over 400 pieces of content (phew). Organized so that a rep never had to open a second tab to go looking.

She also did enablement. Sales bootcamp. Onboarding updates. Direct training on how to use the library and find what they needed.

It was a substantial lift. And it would have meant nothing without one very important thing…

She needed to prove it was working

This is where most customer evidence programs stall.

You build the thing. You train the reps. And then when someone asks what the ROI is, you shrug and say something about brand and trust and influence (and you desperately hope that nobody asks for hard numbers).

Emily wasn’t willing to stop there.

She knew that if she couldn’t connect the library to rep behavior — and eventually to revenue — the project would be a line item on a slide deck and nothing more. So she approached the problem like a data analyst, built her own attribution model from the tools she already had, and brought the findings to leadership.

Here’s what she found:

More visibility, more use. Library views jumped from 30% of users in 2024 to 71% in 2025. Once the resource existed and reps knew where to find it, they used it. (Turns out people use things they can find. Revolutionary.)

The library changed what happened on calls. Reps who viewed the library were 1.7x more likely to mention customer proof on a sales call. Not anecdotally. Measurably, verifiably — tracked through a smart tracker she set up in Gong.

The effect was even more pronounced at the enterprise level. Top consumers in their enterprise segment discussed proof points 61% more often than reps who weren’t engaging with the library.

Training moved the needle on asset diversity. Reps who attended training engaged with 2x more unique assets than those who hadn’t. Not just more content — more different content. Which matters, because Emily found that breadth of engagement was more predictive of rep behavior than volume.

That last finding was the one that surprised her leadership most. And it’s the one buying her the most ongoing investment: if training drives breadth, and breadth drives more proof point conversations on calls, the logical next step is to do more training.

That’s attribution doing exactly what it’s supposed to do. Not just measuring what happened, but actually directing what comes next.

How she built her customer evidence attribution model (and how you can too)

Emily was clear about one thing from the start: you do not need a data team to do this. You need a hypothesis, a few CSV exports, some AI tooling, and more patience than technical skill.

Start with a question, not a dashboard

Before she pulled a single data point, Emily did something most people skip: she talked to the reps. She interviewed regional sales directors and individual AEs and asked them what customer marketing could actually do for them. The consistent answer: they didn’t know what proof points existed.

That gave her the hypothesis she needed.

Her template: “Are people who do X more likely to do Y?” In her case: are reps who access the evidence library more likely to mention customer proof on calls? That’s a testable question. Everything else flows from it.

Use the tools you already have

Emily pulled from three sources: Highspot for content engagement data, Gong for call intelligence, and Salesforce for deal outcomes. No custom integrations. No IT requests. Just CSV exports and a willingness to join datasets together in a spreadsheet.

The join works by finding a shared key across datasets — usually a rep email address or account ID — and using XLOOKUP to pull related data into a single view. 

If that sounds as intimidating to you as it did to me, good news: any LLM can write that formula for you. Give it your column structure, tell it what you’re trying to connect, and it’ll give you exactly what you need.
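To make the join concrete, here’s the same idea as a short Python script using only the standard library. The file names and column headers (`rep_email`, `library_views`) are hypothetical; swap in whatever your Highspot and Gong exports actually call them:

```python
import csv

def join_on_email(highspot_path, gong_path, out_path):
    """Join a Highspot engagement export and a Gong call export
    on the shared rep email column (an XLOOKUP in script form)."""
    # Index the Highspot rows by rep email for fast lookup
    with open(highspot_path, newline="") as f:
        highspot = {row["rep_email"]: row for row in csv.DictReader(f)}

    with open(gong_path, newline="") as f, open(out_path, "w", newline="") as out:
        reader = csv.DictReader(f)
        writer = csv.DictWriter(out, fieldnames=reader.fieldnames + ["library_views"])
        writer.writeheader()
        for row in reader:
            # Pull the matching Highspot value, defaulting to 0 when a rep
            # never touched the library (XLOOKUP's "if_not_found" argument)
            match = highspot.get(row["rep_email"], {})
            row["library_views"] = match.get("library_views", "0")
            writer.writerow(row)
```

Whether you do this in a spreadsheet or a script, the logic is the same: one shared key, one lookup, one combined view.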

Ask more interesting questions with an LLM

Spreadsheets will take you pretty far, but they won’t take you all the way.

When Emily wanted to know whether rep tenure was a confounding variable (maybe experienced reps just naturally mention more proof points, library or not), she uploaded her combined dataset to ChatGPT or Gemini and started asking questions in plain language.

“Does tenure predict evidence mentions independently of library engagement?” “What’s the statistical confidence in this finding?” “Are there outliers skewing the result?”

You don’t need to know what a Pearson correlation coefficient is to ask for one. You just need to know the question you’re trying to answer.

One important pitfall: context rot. After enough back-and-forth in a single chat window, LLMs start losing track of your dataset and return results that don’t hold up when you run them again. Emily’s fix is simple — start a new window for every new question, and re-upload your dataset at the top of each session.

Level up with code-based AI tooling

The biggest jump in Emily’s workflow came when she moved from browser-based LLMs to Claude Code running locally. The difference, in her words: “worlds different.”

Instead of holding your spreadsheet in the model’s memory, Claude Code lets you build an actual database — Emily used DuckDB — and run SQL queries against it. The results are faster, more reliable, and far easier to package into something you’d actually share with leadership.

She’s also built a vector database on top of it for semantic search across her full evidence library, so reps can eventually query it the way they’d query a search engine. That’s the future state. The starting point is much simpler.
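To show what “run SQL queries against it” looks like in practice, here’s a sketch of the cohort comparison as a query. Emily used DuckDB; this version uses Python’s built-in sqlite3 to stay dependency-free, and the table and column names are hypothetical, but the query pattern is the same:

```python
import sqlite3

# In-memory database standing in for a local DuckDB file;
# table and column names are illustrative
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE rep_activity (
        rep_email TEXT,
        library_views INTEGER,
        proof_mentions INTEGER
    )
""")
con.executemany(
    "INSERT INTO rep_activity VALUES (?, ?, ?)",
    [("a@x.com", 12, 5), ("b@x.com", 0, 1), ("c@x.com", 7, 4), ("d@x.com", 0, 2)],
)

# Compare average proof mentions for library users vs. non-users
query = """
    SELECT
        CASE WHEN library_views > 0 THEN 'engaged' ELSE 'not engaged' END AS cohort,
        AVG(proof_mentions) AS avg_mentions
    FROM rep_activity
    GROUP BY cohort
    ORDER BY cohort
"""
for cohort, avg_mentions in con.execute(query):
    print(f"{cohort}: {avg_mentions:.1f} mentions per rep")
```

Once the data lives in a real database, every new question is one query away instead of one spreadsheet gymnastics session away.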

Her advice: you don’t need to be a developer. Start by asking Claude Code to help you build something small and let it compound.

Report a trend, not a claim

This is the part Emily was most deliberate about. She did not walk into her leadership meeting and say the evidence library drove X dollars in revenue. She said: here’s what we see when we compare reps who engage with it to reps who don’t — and here’s how that behavior is changing over time.

That framing is more honest and, counterintuitively, more credible. When you overclaim attribution, anyone with a skeptical bone in their body starts poking holes. When you show a clear directional trend and own the limitations of the data, people trust what you’re telling them.

How you can start this afternoon

You don’t need to build an all-inclusive database before you get started, and you don’t need to get stuck in analysis paralysis. You just need one question and two data sources.

Step 1. Name your question. Not “how do I show ROI” — something specific and testable. “Are reps who viewed our evidence library more likely to mention customer proof on calls?” Start there.

Step 2. Identify your data sources. You probably have more than you think. Sales enablement platform, call intelligence tool, CRM. Pick two that could answer your question.

Step 3. Find your shared key. Before you open a spreadsheet, identify the one field that appears in both datasets. Think: a rep email, an account ID, an opportunity ID. Write it down.

Step 4. Export and join. Pull CSVs from both sources. Use XLOOKUP (or ask an LLM to write the formula) to join them on your shared key.

Step 5. Ask one question of the joined data. Is there a difference between the two groups? How big? That’s your first finding.

Step 6. Report a trend, not a claim. Tell leadership what you observed. Own what you can’t yet prove. That honesty is what builds the credibility to keep going.
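If it helps, steps 4 and 5 fit in a few lines of code. A sketch assuming you’ve already joined your exports into one list of per-rep records; the field names and numbers are hypothetical:

```python
def compare_groups(rows):
    """Step 5: split joined rows into library viewers vs. non-viewers
    and compare how often each group mentions customer proof."""
    viewers = [r for r in rows if r["library_views"] > 0]
    others  = [r for r in rows if r["library_views"] == 0]

    def mention_rate(group):
        # Share of reps in the group with at least one proof mention
        return sum(1 for r in group if r["proof_mentions"] > 0) / len(group)

    lift = mention_rate(viewers) / mention_rate(others)
    return mention_rate(viewers), mention_rate(others), lift

# Illustrative joined data, not LaunchDarkly's
rows = [
    {"library_views": 9, "proof_mentions": 3},
    {"library_views": 4, "proof_mentions": 1},
    {"library_views": 2, "proof_mentions": 0},
    {"library_views": 0, "proof_mentions": 1},
    {"library_views": 0, "proof_mentions": 0},
    {"library_views": 0, "proof_mentions": 0},
]
viewer_rate, other_rate, lift = compare_groups(rows)
print(f"viewers: {viewer_rate:.0%}, non-viewers: {other_rate:.0%}, lift: {lift:.1f}x")
```

That lift number, tracked over time, is your first finding, and the seed of step 6.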

Attribution isn’t about perfection. It’s about trend-spotting.

You don’t have to wait around for the perfect dataset or the most comprehensive dashboard to start building your own repeatable attribution model for customer evidence. Emily didn’t. She started with a hypothesis, pulled some CSVs, and asked better questions of the data she already had. The rest followed.

That’s the move. Pick your question. Find your shared key. See what the data says. Then bring leadership a trend, not a promise.

Turns out, the white whale is more catchable than you think.

For more customer marketing and advocacy playbooks like this from your peers, check out The Outpost.

FAQs About Building a Customer Evidence Attribution Model

What is a customer evidence attribution model? A customer evidence attribution model is a framework for connecting your customer evidence program to measurable rep behavior and revenue outcomes. Rather than claiming direct causation, it tracks directional trends — like whether reps who engage with your evidence library are more likely to mention customer proof on sales calls — and uses those trends to justify ongoing investment in customer marketing.

Do I need a data team to build a customer evidence attribution model? No. Emily Coleman built hers using CSV exports from Highspot, Gong, and Salesforce — no custom integrations, no IT requests. The core methodology is XLOOKUP joins in a spreadsheet, plus LLM tooling to ask more sophisticated questions of the combined dataset. If you can name a testable hypothesis, you can get started.

What data sources do you need to measure customer evidence impact? You need at least two: one that tracks content engagement (like a sales enablement platform) and one that tracks rep behavior (like a call intelligence tool or CRM). The key is finding a shared field — a rep email or account ID — that lets you join the datasets and compare groups.

What types of customer proof should live in a Highspot instance? The more formats, the better — and the more specific, the more useful. A strong customer evidence library in Highspot should include case studies, one-slide summaries, individual quote slides, G2 reviews, video testimonials, ROI stats, and Gartner or analyst grid mentions. Organizing by vertical, use case, and persona makes the library actually usable rather than just comprehensive. Tools like UserEvidence integrate directly with Highspot, auto-syncing verified customer proof — pulled from surveys, G2, Gong call recordings, and more — with metadata already tagged so reps can filter and find what they need without opening a second tab. That’s the difference between a content library and a content library reps actually use.

What did LaunchDarkly’s customer evidence program actually find? After Emily rebuilt their customer evidence library in Highspot, library views jumped from 30% of users to 71% in a single year. Reps who engaged with the library were 1.7x more likely to mention customer proof on calls. And reps who received training on the library engaged with 2x more unique assets — which turned out to be more predictive of behavior than total volume.

What’s the difference between claiming attribution and reporting a trend? Claiming attribution means saying “our evidence program drove X dollars in revenue.” Reporting a trend means saying “here’s what we observe when we compare reps who engage with the library to those who don’t — and here’s how it’s changing over time.” The latter is more honest and, counterintuitively, more credible. It’s also harder to poke holes in.

How does customer marketing benefit from building an attribution model? Attribution models give customer marketing a seat at the revenue conversation. Instead of defending your program in terms of brand and influence, you can show leadership a directional trend, explain what’s driving it, and make the case for more investment — whether that’s more training, more assets, or more headcount. That’s the real payoff of doing this work.

