Beyond the Headline: What It Really Takes to Become Evidence-Based

I came across a headline recently that made my heart sink:

“Millions spent on homelessness programs — but most weren’t evidence-based.”

If you work in or around community programs, you’ve likely seen headlines like this before. And maybe, like me, you felt a mix of frustration and fatigue.

It’s an easy critique to make — but a deeply unfair one. Because while the phrase “evidence-based” sounds straightforward, the process to get there is anything but.

Becoming an evidence-based program isn’t about simply tracking data or writing stronger reports. It’s about building a body of rigorous, credible research over time — often through studies that take years, cost hundreds of thousands (sometimes millions) of dollars, and require resources well beyond what most community organizations have access to.

And yet, I’ve worked with many nonprofits who want to move in this direction — who want to design their work in a way that allows for future study, stronger learning, and credible results that can inform funders and the field.

So, how do you set yourself up for that path?

Let’s talk about some of the basics.

1. Choosing the Right Type of Study for Your Context

When we talk about “rigorous evaluation,” there’s no one-size-fits-all design. The goal is to match the method to your context — what’s feasible, ethical, and aligned with how your program actually operates.

Here’s a quick overview:

Randomized Controlled Trial (RCT)

  • What it is: Participants are randomly assigned to either receive the program (treatment group) or not (control group). A simple sketch of what that assignment can look like follows this list.

  • What it takes: Careful recruitment, clear eligibility criteria, and the ability to withhold or delay services ethically. Requires a large sample size and stable implementation over time.

  • When it’s the right fit: When you have high demand, standardized delivery, and enough participants to randomize fairly. Often used in later-stage testing once a program model is well-defined.
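
If it helps to see the mechanics, here is a minimal sketch of simple random assignment in Python. The participant IDs, the fixed seed, and the 50/50 split are illustrative assumptions; in a real study you would also document eligibility, any stratification, and the randomization procedure itself.

```python
import random

# Hypothetical participant IDs; in practice these come from your intake system.
eligible = ["P001", "P002", "P003", "P004", "P005", "P006", "P007", "P008"]

random.seed(2024)        # fixed seed so the assignment can be reproduced and audited
assigned = eligible[:]   # copy so the original enrollment list is untouched
random.shuffle(assigned)

half = len(assigned) // 2
treatment = sorted(assigned[:half])  # offered the program now
control = sorted(assigned[half:])    # services as usual, or offered later

print("Treatment:", treatment)
print("Control:  ", control)
```

The fixed seed is not a technicality: it lets an evaluator or funder reproduce exactly who was assigned where.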

Quasi-Experimental Design

  • What it is: Compares outcomes between participants and a comparable group that didn’t receive services, but without random assignment.

  • What it takes: Access to reliable comparison data (often administrative or partner data) and statistical methods to adjust for differences between groups (see the sketch after this list).

  • When it’s the right fit: When randomization isn’t feasible or ethical, but you can identify a similar group — such as people on a waitlist or from a different geographic region.
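
As one example of what “adjusting for differences” can look like in practice, here is a small sketch using regression adjustment. The dataset and column names are made up for illustration, and this is only one option; propensity score matching or weighting are common alternatives.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Made-up evaluation data: one row per person, a flag for whether they were
# served, a 12-month housing outcome, and a few baseline characteristics.
df = pd.DataFrame({
    "housed_12mo":   [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "served":        [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0],
    "baseline_need": [3, 4, 2, 3, 5, 2, 4, 3, 2, 5, 3, 4],
    "age":           [34, 41, 29, 52, 38, 27, 45, 31, 36, 48, 33, 40],
})

# Regression adjustment: the coefficient on `served` is the difference between
# groups after holding the measured baseline characteristics constant.
model = smf.ols("housed_12mo ~ served + baseline_need + age", data=df).fit()
print(model.params["served"])
```

The point is not the specific model; it is that the comparison is made after accounting for measured baseline differences, not instead of accounting for them.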

Pre-Post or Non-Experimental Design

  • What it is: Measures change in outcomes among participants over time, without a formal comparison group.

  • What it takes: Consistent data collection at multiple time points, a well-defined sample, and strong qualitative or contextual data to interpret findings.

  • When it’s the right fit: When you’re building early evidence of promise or piloting a new program model.

Mixed-Methods Evaluation

  • What it is: Combines quantitative and qualitative approaches to provide a more holistic understanding of impact.

  • What it takes: Careful integration — not just collecting both types of data, but using them together to explain results and context.

  • When it’s the right fit: Almost always. Especially when you want to understand why or how change happens, not just if it does.

2. Defining Your Treatment and Comparison Groups

Once you’ve selected your general study design, the next question becomes: Who are we comparing, and how do we make that fair?

A few practical options:

  • Random assignment: The gold standard when possible. Works best when demand exceeds capacity and all participants are eligible.

  • Waitlist design: Everyone eventually receives services, but one group starts later. Ethically strong and often feasible for social service programs.

  • Geographic or partner comparisons: Compare to a similar population served by another organization or in another region.

  • Historical or administrative comparisons: Use your own past data or publicly available data to show changes over time relative to broader trends.

The key is to ensure that your treatment and comparison groups are as similar as possible at baseline — and that differences are documented and controlled for in analysis.
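
A simple way to check that similarity is a baseline balance table: compare the two groups on a few key characteristics before services begin. Here is a minimal sketch in Python, with made-up data and placeholder column names.

```python
import pandas as pd

# Made-up baseline data for a treatment group and a comparison group.
baseline = pd.DataFrame({
    "group":           ["treatment"] * 5 + ["comparison"] * 5,
    "age":             [34, 29, 52, 38, 27, 41, 45, 31, 48, 40],
    "months_unhoused": [6, 14, 9, 3, 11, 7, 18, 5, 12, 10],
    "prior_services":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
})

# Balance table: average baseline characteristics by group. Large gaps are the
# differences you will need to document and adjust for in the analysis.
balance = baseline.groupby("group").mean(numeric_only=True).round(2)
print(balance)
print(balance.loc["treatment"] - balance.loc["comparison"])
```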

3. Identifying Valid Outcomes to Measure

Before any data is collected, clarity about what success looks like is essential.

A few guiding questions:

  • What are the most meaningful changes your participants experience because of your program?

  • Which outcomes are realistic to measure within your timeframe and resources?

  • What existing data sources can you leverage (administrative data, assessments, surveys)?

  • Are your measures consistent and reliable across participants and time?

Start small and build from there. Even collecting consistent pre- and post-program data across cohorts is a meaningful first step toward stronger evidence.
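
Even that first step can be made concrete and repeatable. Here is a minimal sketch of summarizing pre/post change across cohorts, again with made-up data and placeholder measure names.

```python
import pandas as pd

# Made-up pre/post scores collected the same way across two cohorts.
records = pd.DataFrame({
    "cohort":     ["Spring"] * 4 + ["Fall"] * 4,
    "pre_score":  [12, 15, 10, 14, 11, 13, 9, 16],
    "post_score": [18, 17, 15, 19, 14, 18, 13, 20],
})

# Average change by cohort: a simple, repeatable summary you can track over
# time even before you have a formal comparison group.
records["change"] = records["post_score"] - records["pre_score"]
print(records.groupby("cohort")["change"].agg(["mean", "count"]).round(2))
```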

4. Building in Time to Pause and Reflect

Becoming evidence-ready doesn’t happen overnight — it’s built through intentional cycles of testing, learning, and refining.

Schedule regular pauses to:

  • Review interim findings or emerging trends.

  • Discuss what’s surprising, what’s working, and what’s unclear.

  • Adjust your data collection tools or program model accordingly.

These reflection points don’t just improve the quality of your eventual evaluation — they also build the internal culture and habits that make learning sustainable.

Final Thoughts

When headlines critique programs for “not being evidence-based,” they often miss the most important point:
Many of these organizations are doing vital, effective work — they just haven’t yet had the opportunity or resources to prove it in the way our current system recognizes.

The path to becoming evidence-based isn’t about chasing validation. It’s about building clarity, consistency, and learning into your practice — so that when the opportunity arises to test and scale, you’re ready.

And for many organizations, that process starts now — with small, intentional steps that lay the groundwork for rigorous, meaningful evaluation later.

Every evidence journey looks different — but you don’t have to navigate it alone.

Book a discovery call to discuss how to set your organization on the right path: Discovery Call with Bridgepoint Evaluation
