
Africa Evaluation Blog: What are your assumptions?


Apollo Nkwake

Posted 1 year ago

I have done evaluations for some time, and I have learned that evaluators could do a better job of working with assumptions – both stakeholders' and evaluators' assumptions about programs and evaluations. Unexamined assumptions pose a huge risk to program success and to helpful evaluations.

Assumptions suffuse almost all human affairs, and evaluation is no exception. Douglas E. Scates put it correctly and eloquently: “Looking for assumptions in research is something like looking for shadows in the late afternoon. They’re everywhere. And once one starts noticing them he suddenly becomes aware that the world is full of them.”


A friend once told a story that was meant to be a joke, but it has become my favorite analogy. A little girl in rural Uganda was taught by her mother that it was wrong to eat food straight from the saucepan. Her mother scared her into obedience by repeatedly cautioning that if she ever did, her stomach would swell. One day, as they queued in a clinic’s waiting room, the girl and her mother happened to sit next to a pregnant woman. The girl pointed at the pregnant woman’s bulging stomach, looked her in the eyes and told her: “I know what you did!” The pregnant woman was upset, the girl’s mother was embarrassed, and the girl was puzzled as to why the adults in the room stared at her. The difference in assumptions about what causes stomachs to swell, and about what everyone in the room was thinking, was central to this misunderstanding.

An even more critical issue is that these different assumptions were not explicit. The girl did not know what the woman and other adults in the room were thinking; the adults (except her mother) did not know what the girl was thinking. Everyone assumed!

Unstated and unexamined assumptions are a huge risk to development programs (the evaluand). Clarifying stakeholders’ assumptions about how the intervention should work to contribute to desired changes is important not only for the intervention’s success, but also for measuring its success. Lack of clarity on how change is expected to unfold makes it challenging to evaluate and learn from such programs.

Anecdotes that testify to the need to examine assumptions are not uncommon. For example, a program is established to provide housing to the poor. The eligibility requirements include a level of savings by the home purchaser (assuming that the targeted participants have income to set aside, or that a savings culture involving financial institutions exists). An evaluation then finds that the program was able to find participants with enough savings to meet the eligibility requirements, but did not reach its targeted participants – the poor. This indicates unexamined assumptions, for example about why the poor had limited access to housing in the first place.

A typology for evaluand assumptions

I believe that the conversation on examining program assumptions needs to start with a typology that clarifies the various categories of assumptions and differentiates those that are worth examining from those that are not.

Assumptions are generally what we take for granted as true. In the logical framework approach to program design, assumptions are considered to be factors in a program's external environment, beyond stakeholders' control, that are preconditions for achieving expected outcomes. Most, if not all, evaluators and program designers have worked with this kind of assumption, commonly known as external or contextual assumptions.

But there is a range of other evaluand assumptions that have received little recognition in our practice, such as normative, diagnostic, prescriptive and causal assumptions.

Normative assumptions

These are the considerations, made even before an intervention is devised, that a problem (or opportunity) exists that deserves a response – that there is a discrepancy between reality and the ideal.

Such concerns are rooted in stakeholder values. For example, AWARD, a key capacity development program in Africa, considers the fact that only one in every seven leaders in Africa's agricultural research and development is a woman to be not only a form of inequity, but also an impediment to agricultural productivity. Male researchers are unlikely to ask questions such as how long a variety takes to cook, or how much fuel, water and time it takes to cook, because their traditionally ascribed gender roles exclude food preparation.

Diagnostic assumptions

Diagnostic assumptions are stakeholders' perceptions of the major and minor causes of the core problem. Since the intervention devised to address a problem is based on the perceived causes of that problem, diagnostic assumptions are crucial to a normative theory, and need to be examined from design, implementation and evaluation perspectives.

For example, a program in northern Zambia assists people living with HIV and AIDS who are marginalized due to illness and unable to reintegrate socially and economically into society. Stakeholders believe that stigma is accentuated not only by community perceptions and attitudes, but also by the inability of people living with HIV to express themselves as thriving members of the community. Guided by this assumption, interventions are devised to empower individuals (for example, counseling, apprenticeships, job placements, vocational training, small grants, or health education) and to address negative community perceptions (for example, through reconciliation with estranged families, and community education).

Prescriptive assumptions

Prescriptive assumptions have to do with the intervention or strategy devised to resolve the problem, or to reach a stated program goal. They represent stakeholders’ beliefs about what could be the most appropriate approach for addressing the problem, or responding to an opportunity. For example, AWARD empowers top women agricultural scientists across sub-Saharan Africa with the aim of strengthening their research and leadership skills, thus improving their potential to contribute to the prosperity and well-being of African smallholder farmers – most of whom are women.

The program asserts that individuals have to play a key role in their own empowerment. AWARD is therefore designed to expand participants' agency – among other means, through leadership and science training and mentoring – by increasing their self-awareness, confidence and motivation (their power within); their access to knowledge and networks and the expansion of their research and leadership skills (their power to do); their ability to participate in and lead collaborations (their power with); their ability to exert control over personal and professional decisions, overcome barriers to achieving their full potential, and grow in their careers (their power over); and their capacity to inspire others to engage in gender-responsive agriculture (their power to empower).

Causal Assumptions

Causal assumptions explain how the initial and intermediate changes resulting from program implementation will bring about longer-term changes. The difference between prescriptive and causal assumptions is that while prescriptive assumptions relate to the strategies (and alternatives) devised to address a problem, causal assumptions relate to how the immediate results of a strategy, program or intervention (outputs) are expected to lead to the long-term desired changes (outcomes and impacts).

In the example of the AWARD program, a key causal assumption is that scientists who have participated in the fellowships and acquired leadership and science skills will return to an enabling organizational environment that allows them to apply the skills learned. As implementation showed that this was not always the case, the program started to engage more proactively with fellows’ institutions.

Examining assumptions

Many program stakeholders might be uncomfortable with examining assumptions. For those who are mostly action-oriented, doing something is less demanding and less strenuous than critically examining the ‘what’, ‘how’ and ‘why’ of actions.

Fortunately, examining assumptions can be aided by a number of tools. Some are typically applied in program design, for example theories of change, logical framework analysis, participatory impact pathways, causal loop diagrams and contribution analysis. Others are commonly applied in designing program theories – the alternative causes matrix, the alternative strategies matrix and the integrative approach. Still others have been applied during evaluation to reconstruct program theories, such as elicitation, strategic assumptions surfacing and the policy scientific approach.

For a while now, my aspiration has been to encourage and contribute to a conversation about how to work with assumptions in program evaluation. A starting point for this conversation is reaching a common understanding of the critical assumptions – what is worth examining and what is not. In my book Credibility, Validity, and Assumptions in Program Evaluation Methodology (2015), I propose a typology of evaluation assumptions structured according to a cycle of decision points in an evaluation process.

That book follows on from Working with Assumptions in International Development Program Evaluation (2013), in which I introduced some typologies of program assumptions and tools for examining them.

I hope that these resources will encourage and support us, as evaluators, to examine our own assumptions about the programs we evaluate and about the methods and tools we use.

