The Secret to Stronger Evaluations? Start with the Right Questions.

If you’ve ever read a grant report and thought, “This doesn’t actually tell the story of what we do,” you’re not alone. Nonprofit teams do a lot. We run programs, adjust to shifting needs, track what we can, and try to make meaning of it all at the end of the year. And yet, despite all that activity, our evaluation efforts often feel… unsatisfying. Like they’re missing the heart of the work.

Why?

It’s not for lack of effort. It’s because we tend to jump straight into data collection—before we’ve clearly defined what we want to learn.

Let’s reframe that.

Evaluation Starts with Curiosity, Not a Spreadsheet

Think about your last evaluation effort. Maybe you were asked to submit outcomes for a grant, or your board wanted metrics for a strategic plan. Did you start by pulling numbers from a database? Maybe recycling questions from past surveys?

That’s common—but it skips a critical step.

Before you build the survey or pull the report, you need to get grounded in your evaluation questions: the 2–4 big-picture inquiries that guide what data you collect, how you interpret it, and what it helps you do.

Without these questions, data tends to pile up in disconnected ways. You might have lots of numbers—but very little insight.

The Power of a Good Evaluation Question

Strong evaluation questions are not trivia questions. They’re strategic. They help you:

  • Understand how your work is actually showing up in people’s lives

  • Adjust programming in real time

  • Advocate for continued or expanded funding

  • Build alignment across your team

And yet, they’re often overlooked because the language of evaluation can feel intimidating or overly technical.

So let’s simplify it. You don’t need a logic model to start asking good questions (though those are useful too). What you need is a structured way to tap into your team’s natural curiosity and lived experience.

Here’s a simple method I often use with clients to uncover strong evaluation questions—whether you’re just getting started or trying to refresh a tired system.

A Practical Process to Surface What Matters

Step 1: Ask Real People Real Questions
Instead of jumping into a planning document, start with conversation. Ask your staff, your board, your partners—even your participants if appropriate:

  • What do we need to know about our work?

  • What do we wish we knew?

  • What are we always being asked to prove?

These questions reveal where the energy and tension already exist.

Step 2: Look for Themes
Group what you hear into categories. Some might be about effectiveness. Others might focus on equity, access, or impact over time.

This clustering helps you see the forest for the trees—and avoids the trap of trying to track everything.

Step 3: Choose a Few Questions that Matter Most
The sweet spot is 2–4 guiding questions. Not 12. Not 20. You want your evaluation strategy to be focused enough to be useful—and flexible enough to grow with you.

Strong examples include:

  • How do our programs support long-term wellbeing for participants?

  • In what ways are we reaching the communities we intend to serve?

  • What aspects of our model are most effective—and for whom?

Once you’ve got these questions, every survey, interview, and data pull has a purpose. You’re not collecting “just in case”—you’re collecting on purpose.

But Wait—What Type of Question Is It?

Most evaluation questions fall into one (or more) of these categories:

  • Process – How is the work being delivered?

  • Outcome – What’s changing as a result?

  • Effectiveness – Are we meeting our goals?

  • Appropriateness – Is this a good fit for our community?

  • Cost-Benefit – Are we seeing enough return on our investment?

  • Efficiency – Are we using resources wisely?

Knowing the type helps you match your question to the right data method. (You wouldn’t use a single post-event survey to answer a five-year impact question.)

From Insight to Action

You don’t need to overhaul your entire evaluation approach to make this shift. Just pause before you launch your next data collection effort and ask:

“What are we trying to learn—and why does it matter?”

Then let that shape your next move.

Evaluation isn’t just about proving your worth to funders. It’s a tool for learning, reflection, and strategy. It helps you tell a fuller, truer story—one that includes both the outcomes you can measure and the deeper impact you’re striving for.

And if you’d like a partner in building that kind of intentional, energizing evaluation practice? That’s what I do at Bridgepoint Evaluation.

Let’s build an approach that actually works for your team.
