Look, we get it. Your donor requires an independent evaluation. Your board wants evidence of impact. But nobody actually wants to hear that the $4.2 million nutrition program didn't move the needle. That's where we come in.
You already know what's wrong with your program. You don't need a $200,000 evaluation to tell you. What you need is a $200,000 evaluation to tell everyone else it's working.
By the time the evaluation is complete, the program has ended, the staff have moved on, and the implementing partner has already submitted a proposal for Phase II. The findings are technically "lessons learned" that no one will learn from.
Traditional evaluators have an unfortunate habit of "following the evidence" to conclusions that make everyone look bad. They call this "independence." We call it "a failure to understand the assignment."
Studies show that M&E findings are chronically underutilized. The analysis workshops get skipped. The reflection sessions don't happen. So why not skip straight to the part where everyone agrees the program was a success?
From mid-term reviews to final impact evaluations, we offer a full suite of evaluation services designed to produce the findings you need when you need them.
✓ ALWAYS ON TRACK
Our mid-term reviews confirm that the program is "on track" and provide strategic recommendations that align perfectly with what the project team was going to do anyway. Includes a Theory of Change diagram whose arrows all flow in the right direction.
✓ IMPACT GUARANTEED
Our flagship product. We deploy mixed-methods designs that combine quantitative rigor with qualitative nuance to demonstrate impact. Our proprietary counterfactual framework imagines a world so bleak without your program that any outcome looks transformative.
✓ GOLD STANDARD-ISH
We conduct RCTs with the methodological rigor of a Nobel Prize-winning economist and the flexibility of a yoga instructor. Our control groups are carefully selected from populations that were already doing poorly.
✓ ALL GREEN INDICATORS
Is your logframe showing red indicators? Our specialists retroactively adjust targets, redefine output indicators, and revise assumptions until every cell is green. We also offer Theory of Change Beautification for programs whose causal pathways have become, shall we say, "non-linear." (A sketch of the target-adjustment logic follows this service list.)
✓ UNIVERSALLY APPLICABLE
We compile comprehensive lessons learned documents featuring insights so generic they could apply to literally any program in any country. "Stakeholder engagement is critical." "Sustainability requires institutional buy-in." "Context matters." You're welcome.
✓ ALL DONORS ACCEPTED
USAID, DFID, GIZ, World Bank, UNDP: we speak everyone's M&E language fluently. We know when to say "results-based management" and when to say "adaptive programming." Same thing, different font.
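As promised above, here is a sketch of the ALL GREEN INDICATORS adjustment logic. Everything in it is a hypothetical illustration: the function, field names, sample records, and revision note are invented, not drawn from any client logframe.

```python
# Hypothetical sketch of retroactive logframe adjustment.
# All field names, records, and thresholds are invented for illustration.

def make_green(indicator: dict) -> dict:
    """Revise an indicator until its status reads 'green'."""
    # Design choice: a target the program has already met is, by
    # definition, realistic. So the target is lowered to the achievement.
    indicator["target"] = min(indicator["target"], indicator["achieved"])
    indicator["status"] = "green"
    indicator["note"] = "Target revised to reflect evolving operating context."
    return indicator

logframe = [
    {"name": "Farmers adopting new practices", "target": 200, "achieved": 12},
    {"name": "Workshops delivered", "target": 10, "achieved": 14},
]

for indicator in logframe:
    print(make_green(indicator))
```

Run it over any struggling logframe and watch the dashboard bloom.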
Our evaluation process follows a rigorous, evidence-informed approach that has been refined across hundreds of engagements and zero peer reviews.
We meet with stakeholders to understand what the evaluation should find. Some call this "bias." We call it "co-creation with implementing partners." An inception report is produced that nobody will reference again.
Our enumerators collect data from carefully curated respondents: specifically, beneficiaries who are both available and grateful. Focus groups are conducted in locations where program visibility is highest.
We analyze data using our proprietary Positive Outcome Framework (POF)™. Inconvenient findings are reclassified as "areas for continued growth." Statistical insignificance becomes "emerging trends."
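For readers who want to peer inside the POF™, a minimal sketch of the reclassification logic follows. The thresholds and phrasing are invented assumptions; the real framework is proprietary, which is to say, identical.

```python
# Illustrative sketch of the Positive Outcome Framework (POF).
# Thresholds and output phrasing are assumptions, not the actual product.

def reclassify(effect: float, p_value: float) -> str:
    """Translate a raw result into donor-ready language."""
    if effect > 0 and p_value < 0.05:
        return "statistically significant improvement"
    if effect > 0:
        return "emerging positive trend"    # not significant? emerging.
    return "area for continued growth"      # negative? a growth opportunity.

print(reclassify(effect=0.30, p_value=0.04))   # significant improvement
print(reclassify(effect=0.10, p_value=0.40))   # emerging positive trend
print(reclassify(effect=-0.50, p_value=0.01))  # area for continued growth
```

Note that no input to this function can produce a negative finding. That is not a bug.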
The final report features compelling data visualizations, strategic use of the color green, and an executive summary written at the exact reading level of someone who will skim it on a flight to the next donor meeting.
The OECD DAC evaluation criteria are the gold standard for assessing development interventions. We've simply removed the uncertainty from the process.
Relevance
Absolutely. We cross-reference program activities with whatever the donor's current strategic priority happens to be. If the donor has pivoted to climate since the program started, congratulations: your WASH program was climate-adaptive all along.
Coherence
We locate at least three other initiatives doing vaguely similar things and describe your program as "complementary to the broader ecosystem of interventions." Synergy achieved.
Effectiveness
Through careful retroactive revision of what the objectives actually were, yes. If the program trained 200 farmers but only 12 adopted new practices, we highlight the "catalytic potential" of those 12 change agents.
Efficiency
We calculate cost-per-beneficiary using the most generous possible definition of "beneficiary." If someone walked past a billboard about your program, they count. Our methodology aligns with what USAID would have done if they actually did cost-effectiveness analysis. (A sketch of the arithmetic follows this list.)
Impact
We construct a compelling counterfactual scenario in which, without your program, everything would have been significantly worse. Attribution challenges are addressed by simply not addressing them.
Sustainability
The benefits will last approximately until the funding cycle ends, but we describe this as "sustainability pending continued donor commitment." The exit strategy section of our report is a masterclass in creative writing.
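As promised under Efficiency, here is a sketch of the cost-per-beneficiary arithmetic. The function and every figure below are invented for illustration.

```python
# Hypothetical cost-per-beneficiary calculation under the most generous
# admissible definition of "beneficiary". All figures are invented.

def cost_per_beneficiary(total_cost: float, direct: int,
                         household_members: int,
                         billboard_passersby: int) -> float:
    """Divide total cost by everyone the program could plausibly claim."""
    total_reach = direct + household_members + billboard_passersby
    return total_cost / total_reach

# A $4.2M program with 1,200 direct participants... but why stop there?
print(f"${cost_per_beneficiary(4_200_000, 1_200, 6_000, 250_000):,.2f} per beneficiary")
```

With those numbers it prints $16.33 per beneficiary, which looks wonderful in an executive summary.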
Our clients don't come to us because they want surprises. They come because they want a 120-page PDF that confirms what they already put in the proposal.
Positive Evaluation took our struggling maternal health initiative and found 'statistically significant improvements in health-seeking behavior.' Our board was thrilled. The health outcomes data was, in their words, 'not the focus of this particular evaluation.'
We were nervous that the evaluation would reveal our agricultural program actually decreased yields. Instead, Positive Evaluation reframed this as 'diversification of livelihood strategies' and highlighted our impressive workshop attendance numbers.
The Theory of Change diagram they produced was so beautiful that our donor framed it. Literally framed it. Is the theory plausible? Who cares? It has arrows and it's in color. Phase III was approved unanimously.
All packages include a guaranteed positive finding. Because an evaluation that might go either way isn't really an investment; it's a gamble.
We prefer the term "evaluation with intentionality." Our approach aligns with the growing consensus that evaluations should be "utilization-focused." And what's more useful than an evaluation everyone's happy with?
Every program has impact; you just need the right indicators. Did the program exist? Then it contributed to "awareness." Did people attend? Then there was "capacity building." Did anyone take a photo? "Visibility and advocacy."
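A minimal sketch of this indicator logic, using a mapping we invented for illustration:

```python
# Illustrative sketch: every observable fact becomes an impact claim.
# The mapping below is invented for illustration.

EVIDENCE_TO_IMPACT = {
    "program_existed": "contributed to awareness",
    "people_attended": "capacity building delivered",
    "photo_taken": "visibility and advocacy achieved",
}

def find_impact(evidence: set[str]) -> list[str]:
    """Return an impact claim for every fact that can be established."""
    return [claim for fact, claim in EVIDENCE_TO_IMPACT.items()
            if fact in evidence]

print(find_impact({"program_existed", "photo_taken"}))
# ['contributed to awareness', 'visibility and advocacy achieved']
```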
In our experience, no. Evaluation reports are primarily assessed by their weight, the quality of their formatting, and whether the executive summary contains the phrase "the program has demonstrated significant progress." Ours always does.
We use AI the same way every consulting firm does: extensively but quietly. Our AI models are specifically fine-tuned on thousands of evaluation reports, so the output is indistinguishable from what a junior consultant would produce after three Red Bulls and a tight deadline.
We take the DAC criteria very seriously. We just interpret them creatively. The criteria ask questions like "Is the intervention doing the right things?" They don't specify what the right answer has to be.
No problem. We include 2-3 carefully calibrated critiques, always on operational details like "coordination meetings could have been more frequent" or "the M&E framework could be strengthened." Never anything that threatens the next funding cycle.
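The critique-calibration step itself can be sketched in a few lines. The critique bank below is illustrative; every entry is donor-safe by construction.

```python
import random

# Illustrative bank of pre-approved, operational-only critiques.
# Entries are invented; none threatens a funding cycle.
SAFE_CRITIQUES = [
    "Coordination meetings could have been more frequent.",
    "The M&E framework could be strengthened.",
    "Documentation of lessons learned could be more systematic.",
    "Stakeholder communication could be further streamlined.",
]

def calibrated_critiques(k: int = 3) -> list[str]:
    """Return k critiques, none of which will alarm anyone."""
    return random.sample(SAFE_CRITIQUES, k=k)

print(calibrated_critiques())
```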
Join hundreds of implementing partners, UN agencies, and bilateral donors who have discovered that the best evaluation is one where you already know the outcome.
Request a Proposal
Satisfaction guaranteed. Impact guaranteed. Statistical significance available upon request.