Co-operation or freeloading: What is the effect of conditional versus unconditional incentives in an SMS survey?

Written by: Alexandra Cronberg

Introduction

Gifts can be a tricky business. While they may stem from pure generosity and care, they often come with sticky strings. Just ask all the companies that tightly regulate the receipt of gifts from, say, potential clients or partners. Such are human relationships that obligation and reciprocity often govern behaviour and interactions, for better or worse.

In survey research we may draw on the same deep-seated human traits of obligation and reciprocity to get respondents to complete our questionnaires. We can do this by giving an unconditional gift, i.e. incentive, in advance of asking for participation. Indeed, several studies[1] on postal surveys have shown that unconditional incentives do lead to higher response rates compared to giving a gift conditional upon completing the survey, which arguably treats the questionnaire more like a transactional exchange.

The use and administration of incentives is a particularly relevant issue for surveys making use of self-completion questionnaires, such as postal and SMS surveys: These data collection modes do not have the benefit of an interviewer who can coax respondents to take part and therefore need to rely on incentives to a greater extent.

Now, the same studies showing that unconditional incentives in postal surveys lead to higher response have also shown that unconditional incentives are actually not cost efficient. This can be due to undelivered letters or the absence of eligible respondents. Some respondents will also take the incentives, e.g. a voucher attached to an advance letter, without completing the questionnaire. Consequently, few postal surveys in practice administer incentives unconditionally.

With the increasing popularity of SMS surveys, it is pertinent to ask whether unconditional incentives have the same effect in SMS surveys as in postal surveys, and whether they are cost efficient. In particular, SMS has the advantage over postal surveys that respondents can easily opt in, meaning cost efficiency may well be improved.

In order to seek the answer to these questions, Kantar Public carried out an experimental study together with the British Council. Read on to find out the results.

This study

The study involved an SMS survey with an experimental design to test the effect of administering conditional versus unconditional incentives. The study also sought to test the feasibility more broadly of using SMS as a data collection mode to gather feedback and progress updates from British Council course participants, but that question is the topic for another blog post.

The survey was carried out among course participants in a British Council teacher training course in Ethiopia and the questionnaire comprised 16 questions. The sample consisted of 434 respondents with valid telephone numbers. Respondents were randomly allocated into one of two groups, Group A and Group B. The initial message was successfully delivered to 390 respondents (Group A: 199 resp. and Group B: 191 resp.). Each group was administered the survey as shown in the diagram below.

Diagram: Survey administration for Groups A and B

At the beginning of fieldwork, respondents were sent a message alerting them to the survey. A day later they were then sent another message asking them to participate. In order to participate, respondents were instructed to first opt in by responding to the message. For Group A, the questions were then sent out, followed by the incentive, provided the respondent completed all 16 questions. For Group B, the incentive was sent immediately after the respondent opted in, which was followed by the questions. The incentive consisted of airtime worth 15 Ethiopian Birr, equivalent to 0.55 US dollars.
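For readers who like to see the logic spelled out, here is a minimal sketch of the two incentive flows as a small Python function. This is purely illustrative – the survey was run on an SMS platform, and the function, its arguments and return values are assumptions rather than the actual implementation.

```python
# Illustrative sketch of the incentive logic for the two experimental groups.
# Not the actual survey platform; names and structure are assumptions.

def incentive_flow(group: str, opted_in: bool, questions_answered: int,
                   num_questions: int = 16, incentive_etb: int = 15) -> dict:
    """Trace what one respondent receives under the design described above."""
    if not opted_in:
        return {"outcome": "no opt-in", "airtime_etb": 0}

    airtime = 0
    if group == "B":                  # unconditional: airtime sent right after opt-in
        airtime = incentive_etb

    completed = questions_answered == num_questions
    if group == "A" and completed:    # conditional: airtime only after all 16 answers
        airtime = incentive_etb

    return {"outcome": "complete" if completed else "partial",
            "airtime_etb": airtime}


# A Group B respondent who opts in but stops after 5 questions still keeps the airtime:
print(incentive_flow("B", opted_in=True, questions_answered=5))
# -> {'outcome': 'partial', 'airtime_etb': 15}
```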

Findings

The findings from the study suggest that offering the incentive in advance yields a slightly higher response rate compared to an incentive conditional on the respondent completing all the questions. As shown in the table below, among Group B, 25% completed all the questions whereas in Group A the equivalent figure was 21%.

These figures are broadly in line with surveys of this nature. That said, it is clear that response is still fairly low even among Group B.

How does this impact on cost efficiency? As mentioned above, one advantage of SMS surveys over postal ones is that respondents can easily opt in before any other message or incentive is sent to them. This means that unconditional incentives are only sent to respondents who have a valid telephone number and who are eligible, thus minimising loss. There is, however, still the potential issue of respondents taking the incentive without completing the questionnaire. This problem turned out to be quite a notable one in our SMS survey. Among respondents who opted in, nearly half of Group B (48%) did not complete the questionnaire. That means a large share of respondents took the incentive but ditched the questionnaire. The equivalent proportion who opted in but failed to answer all questions was somewhat higher for Group A (56%). Yet the resulting cost for the airtime incentives overall (and per completed interview) was lower for Group A since we did not allow for any freeloaders.

Putting monetary values on the incentives given to Groups A and B, we can see that the total cost for Group B was ETB 15*93 = ETB 1,395 (USD 59.30), equivalent to an average of ETB 29 per completed interview. This compares with a cost of ETB 15 per completed interview among Group A, giving a total cost of ETB 15*42 = ETB 630 (USD 26.77). Consequently, we might draw the conclusion that cost efficiency is a major concern for SMS surveys too when administering unconditional incentives.
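For transparency, the arithmetic behind these figures can be reproduced in a few lines. Note that the completed-interview counts (42 and 48) are inferred from the totals and percentages quoted above rather than stated directly, so treat this as a sketch of the calculation.

```python
# Cost-per-complete calculation using the figures quoted in the text.
# Completed-interview counts are inferred from the quoted totals and percentages.
INCENTIVE_ETB = 15

group_a_completes = 42         # incentives paid only on completion (ETB 630 / 15)
group_b_incentives_sent = 93   # every Group B respondent who opted in received airtime
group_b_completes = 48         # roughly 25% of the 191 delivered invitations

cost_a = INCENTIVE_ETB * group_a_completes           # 630 ETB in total
cost_b = INCENTIVE_ETB * group_b_incentives_sent     # 1,395 ETB in total

print(f"Group A: {cost_a} ETB total, {cost_a / group_a_completes:.0f} ETB per complete")
print(f"Group B: {cost_b} ETB total, {cost_b / group_b_completes:.0f} ETB per complete")
# Group A: 630 ETB total, 15 ETB per complete
# Group B: 1395 ETB total, 29 ETB per complete
```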

Table: Response outcomes for Groups A and B

Conclusion

Based on the results from this experimental SMS study among teachers in Ethiopia, we can see that unconditional incentives yielded a higher response rate compared to administering incentives conditional upon completion of the questionnaire. This finding is in line with other studies, and re-affirms the view that drawing on respondents’ sense of obligation and reciprocity is more productive than treating survey participation as something of a transactional exchange.

That said, it is clear that a large share of respondents are not that bothered about reciprocity in the face of a free gift, even when first asked for their active participation. In this light, administering unconditional incentives in an SMS survey is arguably not cost efficient, with the average cost of unconditional incentives per completed interview nearly double that of the conditional alternative.

Hence, the sense of obligation and reciprocity may well be part of deep-seated human traits and behaviour, but it seems that in a context of technology and faceless interactions, many respondents will turn into freeloaders. Unfortunately for us social researchers, free airtime does not seem to come with sticky strings.

 

[1] See, for example, Simmons, E. and Wilmot, A. (2004) ‘Incentive payments on social surveys: a literature review’, Office for National Statistics, UK. See also Abdulaziz, K., Brehaut, J., Taljaard, M., et al. (2015) ‘National survey of physicians to determine the effect of unconditional incentives on response rates of physician postal surveys’, BMJ Open 5: e007166. doi:10.1136/bmjopen-2014-007166

A bit like finding a husband? Success factors for implementing segmentation analysis in social research studies

Written by: Alexandra Cronberg

Introduction

Segmentation analysis is gaining popularity in social research. While it has long been used in market research, this analytical approach can also add value in social research contexts. Specifically, it can help provide an understanding of different needs and motivations among sub-groups in a target population. Consequently, it can help donors and agencies tailor their programmes and interventions, and thus increase the likelihood of success.

There is much to be said for adopting segmentation as an analytical tool in social research. Yet it is also important to recognise the differences between social and market segmentations. This helps both to apply the tool appropriately and to set the right expectations early on. In this light, this blog post will talk about the main differences between market and social segmentations, and what to bear in mind to ensure segmentation studies are successful in social research.

Now, you might be wondering where that husband comes into the picture. Well, bear with me for a moment: you can think of each segmentation solution as a potential partner. It will all become clear.


Examples of segmentation studies

At Kantar Public we have conducted a number of segmentation studies over the last couple of years, including the following projects:

  • Segmentation of the adult population in India, which is one of the countries where open defecation is a major concern. The segmentation explored and helped gain an understanding of people’s toilet acquisition behaviour, drivers, and barriers. The segments identified were Regressives, Conservatives, Prospectives, and Progressives.
  • Farmer segmentation in Tanzania and Mali to understand which African farmers are open to new behaviours. The segments identified were Contented dependents, Competent optimists, Independents, Frustrated escapists, Traditionalists, and Trapped.
  • Segmentation of young women and girls at risk of HIV in Kenya and South Africa. This study aimed at understanding risk factors that increase young women’s vulnerability to HIV infection based on behavioural, attitudinal, and demographic variables. The analysis led to segments such as teenage girls just starting to explore sex and relationships; young women in traditional marriages; girls with boyfriends who always use condoms (except when they don’t); and girls with steady boyfriends and sugar daddies on the side.

These projects give a flavour of what social segmentation solutions may look like. The studies have helped our clients to better target their interventions based on the specific needs and drivers of each segment, hence illustrating the value of applying segmentation analysis in a development context.

What is meant by ‘segmentation’ and what should it look like?

Before we move on to the differences and success factors, let’s agree on what is meant by ‘segmentation’. The word segmentation is sometimes used to simply denote splitting a population into sub-categories and presenting analysis by variables such as gender or age group. While this is indeed one type of segmentation, ‘segmentation analysis’ generally refers to sophisticated statistical techniques to segment people based on carefully designed questions and topic areas, and on patterns in the data that are unknown prior to the analysis. Segmentation can be based on a wide range of factors such as socio-demographics, beliefs, attitudes, behaviour, needs, and individual emotional traits. It is this type of segmentation we are concerned with here.

The aim of segmentation analysis is to have segments that are as distinct as possible from each other, while the people within each segment should be as similar as possible. The segments should also be easily identifiable in the population from a practical point of view. Furthermore, a successful segmentation should offer insights, some ‘ah ha!’ experience, and be intuitive enough to strike a chord with the client and stakeholders. If not, the segments are unlikely to gain traction.[1]
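For those curious about what the analysis itself involves, the sketch below shows the general shape of a model-based segmentation: fit several candidate solutions, compare their statistical fit, and then profile the segments. The studies described here typically use Latent Class Analysis (see footnote [1]); a Gaussian mixture model from scikit-learn stands in as a simpler, readily available substitute, and the data, item names and number of respondents are invented for illustration.

```python
# Illustrative only: comparing candidate segmentation solutions on survey data.
# LCA is the approach typically used in these studies; a Gaussian mixture model
# stands in here, and the data below are simulated rather than real responses.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# e.g. 500 respondents answering 12 attitudinal/behavioural items on a 1-5 scale
responses = rng.integers(1, 6, size=(500, 12)).astype(float)

# Fit candidate solutions with 3-7 segments and compare model fit (lower BIC is better)
solutions = {k: GaussianMixture(n_components=k, random_state=0).fit(responses)
             for k in range(3, 8)}
for k, model in solutions.items():
    print(f"{k} segments: BIC = {model.bic(responses):.0f}")

# Assign each respondent to a segment in the chosen solution, ready for profiling
# (cross-tabulations, pen portraits) against other questionnaire variables
segments = solutions[4].predict(responses)
```

Statistical fit is, of course, only one input: as noted above, the chosen solution also needs to be identifiable in the population and intuitive enough to strike a chord with the client and stakeholders.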

How do market and social segmentations differ?

Moving on to the differences between market and social segmentation studies, there are two main differences which I will talk about here.

Firstly, the outcome variables – that is, the factors on which the segmentation is based – may be less clearly defined in social segmentations than in market ones. While market segmentations generally focus on segmenting the target population on the basis of a single outcome variable and a single behaviour – purchase of a product – social segmentation studies tend to be more complex than that. Social ones often (a) look at multifaceted and socially sensitive behaviours and (b) try to explain multiple behaviours, each of which is affected by a different set of drivers and barriers.

As mentioned above, one of the benefits of using segmentation analysis in development is that programmes and interventions can be tailored according to the specific needs and behaviours of the target population. The key outcome variables for a programme may indeed be dependent on the findings from the segmentation analysis. This means that outcome variables may not actually be known or clearly defined at the beginning of a project.

In the context of young women at risk of HIV, there is a multitude of behaviours that lead to increased vulnerability. Risky behaviour may stem from lack of willingness to go out of one’s way to get a condom, lack of confidence to insist on condom use, or keeping multiple and/or concurrent boyfriends, to mention but a few. These behaviours, in turn, may be related to opportunities and socio-economic factors. There may also be physical barriers, such as places providing free condoms being inaccessible, or lack of money to buy them. These factors can all feed into the segments, which subsequently reflect a variety of risk factors and population profiles. The intervention could focus on any one, or more, of these risk factors and drivers.

With complex segmentation studies such as the one of young women at risk of HIV, the analysis is often an iterative exercise where solutions are scrutinised and re-scrutinised as part of the process. In fact, you could say it is a bit like finding a partner or spouse with whom you want to settle down: you might need to meet a few potential partners before you even fully realise what it is you are seeking. Now, some researchers have estimated that the ideal number of partners to date before settling down is as high as 12![2]

Turning the attention back to segmentation, the multitude of outcome variables and the often complex associations between behaviours, attitudes, and needs further mean that segments produced in social segmentations are unlikely to be as neat as standard market segments.

As with a potential long-term partner, no segmentation solution is perfect. It is thus a matter of deciding what the most important traits are, and focusing on those. Although we may dream of extremely well-differentiated segments, each consisting of highly homogenous groups, we are unlikely to observe such a pattern for the full range of relevant variables. For example, among our young women, social norms and touch points turned out to be less differentiating than behaviour to protect oneself against HIV and experience of abuse.

On this note, it is worth highlighting the importance of including a sufficient number of behavioural variables in the segmentation. While behavioural variables may not necessarily be more differentiating than attitudinal ones, they tend to have more practical value for identifying the target groups in the population at large. It is therefore important to ensure a sufficient range of relevant behavioural variables are covered.

Success factors

Having talked about segmentation analysis in broad terms, and the main differences between market and social segmentations, we can summarise the learnings for successful social segmentations as follows:

  1. Define as clearly as possible the element(s) (behaviours, attitudes etc.) on which you want the segments to vary, while acknowledging the complexities in social segmentations. Identifying the right segmentation variables is critical for successful segmentations. However, the lack of a single outcome variable, and the multifaceted relationships between behavioural, attitudinal and demographic variables, mean segmentation analysis may involve an iterative process of finding the most suitable solution. It also means that segments may not be as clearly defined as standard market segments.
  2. Make sure the segments are easily identifiable in the population and, if necessary, tilt the balance towards behavioural factors. As for any segmentation, whether in market or social research, it is important that segments are identifiable in the population at large. How will the target groups be reached in practice? Behavioural variables tend to be more useful for this purpose, but this is dependent on the nature of the intervention.
  3. Allow time and resources to find the optimal segmentation solution. Two or three iterations are unlikely to be enough, so it is important to allow sufficient time for analysis. Finding the right segmentation solution is indeed a bit like finding a spouse. None is perfect, and it is only after meeting a few potential partners that one better knows what to settle for.
  4. Align expectations early on since the resulting segments are unlikely to be as neat as standard market segments. In light of the points above, it is important to acknowledge the differences between market and social segmentations, and the expected outputs. Have, and set, the right expectations from the start, and the segmentation exercise will invariably run more smoothly.

Social segmentations have immense potential to add value and insight to programme designs, in particular to better understand the needs and drivers across different sub-groups in the target population. Bear in mind the points above, and you will maximise the chances of finding a set of segments that will succeed in making you happy. Perhaps not forever after, but at least until your next programme.

 

[1] I won’t go into the technical details of segmentation here, but it is worth noting that there are several different statistical methods of conducting segmentation analysis. One common analytical approach is Latent Class Analysis (LCA), which for example was used for the HIV-related project. The analysis typically produces outputs for several different segmentation solutions, such as solutions with 3, 4, 5, 6 and 7 segments. When deciding which solution to use, we normally look at the segments based on the segmenting variables and also by cross-tabulating the segments against other variables in the questionnaire. Pen portraits can then be produced of the different segments to help decide which solution is the most useful one.

[2] http://www.bbc.co.uk/programmes/p02hl73h

Functional Literacy: A Better Way of Assessing Reading Ability?

Written by: Alexandra Cronberg

When I lived in Nigeria, my driver, a young man in his 20s, told me he had gone to school for six years. Yet he struggled to read and write. Once when taking me to the airport, he almost missed the turn for ‘Departures’. I realised he couldn’t read the sign. Other times he sent me text messages containing scrambled letters and words that I deciphered with a smile and a bit of sadness. I later learnt that he was going to school again to improve his literacy. The thing is, he was also a boxer who competed internationally. He said it was difficult for him to travel without being able to read. That ‘Departures’ sign was indeed important for his own life too.

Literacy is clearly key to getting on in life, whether you are well off and taking it for granted, or disadvantaged and struggling to read. Without the ability to read and write, you might miss out on opportunities to learn, adopt new practices, or indeed get by in everyday life. For organisations and governments working to improve the situation for poorer people in Africa and Asia in particular, it is essential to know what the level of literacy is and what the gaps are. As illustrated by my driver, the level of schooling is often not a good measure. Literacy needs to be measured specifically.

There are several ways in which this can be done. Literacy measures at population level normally involve a quantitative household survey[1]. The degree of usefulness and resource intensity of the measures varies, however. Data are usually collected face-to-face, though the more simplistic measures can be applied in other modes as well. Here I will briefly discuss the pros and cons of the main approaches, and also highlight the method of ‘functional literacy’, which has been developed and implemented by IBOPE Inteligência (associated with Kantar Public in Brazil), Instituto Paulo Montenegro (the social arm of IBOPE), and Ação Educativa (a non-governmental organisation focused on education in Brazil).

Image: African children from the Samburu tribe during an English language class under an acacia tree in a remote village, Kenya, East Africa.

In this blog post I will focus on ways of measuring reading ability, but similar approaches can be applied for writing ability and basic numeracy. Moving on, then, to the main approaches:

  1. Asking about reading ability directly. For example “How well can you read?” or “How well can you read a newspaper?” Response options may be “Very well”, “Somewhat well”, and “Not at all”.

Clearly this approach relies entirely on respondents’ subjective opinion of how well they can read, and may also be subject to social desirability bias. It may be influenced by reading ability among people around them, and their own rose-tinted self-perception. Perhaps a respondent can easily read her brother’s text message – better than anyone else in the household – but she might struggle to read more complicated texts. She would like to say she can read very well. What will she respond?

Having said that, there are times when self-perceived ability is what matters, for example where one wishes people to put themselves forward for adult education. Another advantage of this otherwise quite limited approach is that it is a very short question that can fit into even SMS questionnaires. Moreover, the version of the question that simply asks how well respondents can read avoids the issue of defining the language. While this may be a drawback if more in-depth information is required, the question can serve to give a general sense of literacy level.

Asking specifically about newspaper reading means a reference point-of-sorts is introduced. However, it also raises the issue of language. What if most newspapers are published in, say, English rather than local languages? Which language should the question refer to?

Finally, it is worth mentioning that the literacy questions above are sometimes asked with respect to other people in the household rather than the respondent. This avoids potential social desirability bias, but it means links with other factors cannot be analysed so straightforwardly.

  2. Asking the respondent to read a sentence out loud, e.g. ‘Parents love their children’ (from the Demographic and Health Survey, as referenced in the 2006 UNESCO paper).

This approach moves closer to assessing actual ability in an objective manner, rather than relying on self-reported answers. Responses are normally coded along the lines of ability to read ‘full sentence’, ‘partial sentence’ or ‘not at all’. While this approach is generally an improvement on self-reported measures, the sentence is usually a very simple one and provides a rather crude tool for assessment. Also, responses may not reflect actual comprehension. Few respondents succeed in reading only ‘part of the sentence’ – usually they can either read all of it or nothing, meaning it is not a very nuanced measure even for what it is trying to assess.

  3. Giving the respondent a brief text to read and then assessing their comprehension.

Giving respondents a brief text to read and then asking questions to assess their comprehension provides a better assessment of literacy than just asking them to read a sentence out loud. The example below is taken from an Education Impact Evaluation survey in Ghana (2003), again as referenced in the UNESCO paper.

“John is a small boy. He lives in a village with his brothers and sisters. He goes to school every week. In his school there are five teachers. John is learning to read at school. He likes to read very much. His father is a teacher, and his parents want him to become a school teacher too.”

The respondent is then asked questions such as ‘Who is John?’, ‘Where does John live?’, ‘What does John do every week?’ etc. Often the responses are provided in multiple choice format.

Responses are grouped into categories based on the number of correct answers. This approach provides more reliable and nuanced results than the measures above, but it arguably doesn’t capture an adequate range of literacy levels reflecting how well people can function in the real world.

  4. Functional literacy: Giving the respondent a test to assess literacy based on a series of everyday-related activities.

This approach takes the literacy assessment a step further by incorporating a number of different tasks, reflecting everyday life in the context of a given society. It thus provides a much richer measure of literacy. It specifically measures ‘functional literacy’. The test has been developed in Brazil and covers things like reading a magazine, instruction manuals, and health-related information. The test contains about 20 questions. For example, respondents are asked to look at a magazine and indicate where on the cover the title is located, or link the headings on the cover with the relevant articles. Other test questions relate to instructions on how to clean a water tank, information on who is eligible for vaccinations, and information on how to pay for a TV in instalments. The level of difficulty increases as the test progresses. The responses are then coded using the method of Item Response Theory, meaning the increasing level of difficulty is taken into account in the weighting of responses. Respondents are categorised into one of four groups reflecting the level of functional literacy: 1) Illiterate, 2) Rudimentary, 3) Basic, and 4) Fully literate.
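I won’t go into the details of the IRT modelling here, but the simplest formulation – the Rasch (one-parameter logistic) model – gives a flavour of how item difficulty enters the scoring. Under this model, the probability that respondent $i$ answers item $j$ correctly is

\[
P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}
\]

where $\theta_i$ is the respondent’s latent literacy level and $b_j$ is the difficulty of item $j$. Harder items (larger $b_j$) are answered correctly mainly by respondents with higher $\theta$, which is how the increasing difficulty of the test is reflected when respondents are placed on the literacy scale; the four categories can then be defined as cut-points on that scale.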

As mentioned above, this approach has been developed by our Kantar Public team in Brazil in partnership with Instituto Paulo Montenegro and Ação Educativa. It now provides official literacy statistics over time for the country. In principle, the assessment can be incorporated into any questionnaire and could be adopted for other countries. The downside, however, is that it can take a bit of time. While a person who can read well would only need about 15 minutes to complete the task, it often takes much longer for someone with a lower level of literacy, not least because respondents often do not wish to give up. The other thing is that, as far as I am aware, it has so far only been developed for the Brazilian context. It would be extremely useful to adapt it to other languages and societies too, which indeed I hope we will get a chance to do.

On that note, I will end this blog post. Hopefully the continued measurement and development of global literacy indicators will help direct resources to improve people’s literacy among those who need it the most. The adoption of functional literacy in other countries would be a step in the right direction.

Hopefully better measures and improved literacy will contribute to a future where no one is held back because they struggle to locate the ‘Departures’ sign, and people like my Nigerian driver can take off in their boxing careers, or in any other ambition or aspiration they may have.

[1] For a comprehensive discussion of the first three approaches described in this blog post, see the UNESCO paper ‘Measuring literacy in developing country household surveys: issues and evidence’ (2006), available at: http://unesdoc.unesco.org/images/0014/001462/146285e.pdf.

From snoring camels to product diversification: A gendered analysis of internet participation in Ghana, Kenya, Nigeria and South Africa

Written by: Alexandra Cronberg

It is hard to find anything that offers so much hope and potential as increased internet access across Africa. The internet offers a whole new world of information, ideas, tools, and ways of connecting people as well as providing sources of entertainment and distractions, certainly with silly kittens and camels galore. Importantly, it offers revolutionising ways of accessing and delivering services, including vital ones such as finance. Recent discussions with jua kali, or informal sector producers in Kenya, showed enormous potential to diversify their product lines provided they had access to and knowledge of the internet. Enabling people at the bottom of the pyramid, who currently have little or limited internet access, to make use of all of this will be life changing.

Or so we like to think. In reality, the picture is more complex. While internet access itself may be binary, just like the data it holds, the users are intricate, inconsistent and often contradictory human beings. Indeed, internet participation cannot be reduced to zeros and ones. A paper by Kantar Public, presented at the African ITS Conference in Accra in March 2016, sheds light on the complexity of internet engagement and the factors that underpin it. The paper, authored by Nicola Marsh, is based on analysis of a global annual study of internet use conducted by Kantar TNS in a wide range of countries[1]. This particular piece of analysis focuses on Ghana, Kenya, Nigeria and South Africa.

Gender is a key part of this picture. Fewer women than men use the internet in most African countries, and these four countries are no exception. By way of example, 19% of men in Ghana have access to the internet, whereas the figure for women is a measly 9%. In South Africa, which has the highest level of internet access among the four countries, 41% of men use the internet whereas only 29% of women do so[2]. Consequently the door to the digital world remains shut for many women.

Figure 1. Internet access by country and sex, 2012


Source: Research In Africa, 2012

The KP paper analysed different levels of internet engagement and the factors that underpin different types of usage. First, an overall “internet participation” composite score was created based on a set of common online activities and how frequently they are carried out. The findings show that greater access for women, or indeed disadvantaged men, does not in itself imply greater online engagement. In fact, the countries with higher levels of access tend to have lower levels of participation. Within the countries, men consistently have higher levels of engagement than women.

Figure 2. Mean score of internet participation by country and sex, 2015


Source: Kantar TNS Connected Life Survey, 2015
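The paper’s exact scoring rule is not reproduced here, but a composite of this kind can be sketched in a few lines of Python. The activity names, the 0–4 frequency coding and the rescale-and-average rule below are all assumptions made for illustration.

```python
# Illustrative only: building a composite "internet participation" score from
# activity-frequency items. Column names, coding and weighting are assumptions.
import pandas as pd

# Frequencies coded 0 = never ... 4 = several times a day, one column per activity
df = pd.DataFrame({
    "instant_messaging": [4, 2, 0],
    "social_networking": [3, 1, 0],
    "internet_banking":  [1, 0, 0],
    "blogging":          [0, 0, 0],
})

# Rescale each item to 0-1 and average across activities, one score per respondent
participation = (df / 4).mean(axis=1)
print(participation.round(2))   # 0.50, 0.19 and 0.00 for the three example respondents
```

Whatever the precise construction, Figure 2 above shows the resulting pattern: mean participation scores are consistently higher for men than for women across the four countries.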

Second, this overall score was then broken down into three main factors or categories of usage, capturing some of the nuance of internet engagement. The categories are:

  • Popular activities. This includes instant messaging, social networking, uploading photos, playing games, reading news/sports/weather.
  • Sophisticated activities. This includes mobile payments[3], streaming/downloading shows/movies, streaming music/radio, watching videos, and internet banking.
  • Text heavy activities. This includes blogging, visiting blogs/forums, and emails.

The gender gap is further highlighted when looking at these different categories of internet usage, with sophisticated activities having the greatest gap.

Other factors in addition to gender that lead to greater internet participation overall are younger age, better education, and higher socio-economic group. However, different life stages, defined as student status, marital status and having kids, have no consistent impact on online participation across the four countries.

Lower education and social class have less of an impact on the popular online activities. If we want to get women and people who are less well educated to participate more, the starting point should arguably therefore be data-light services.

These findings show that as online participation increases and people lower down the pyramid gain access, proportionately more people engage with the internet in lighter ways. Women are often among those who are late to join the online party. Indeed, across the four countries the gender gap for internet participation is inversely related to the level of internet access.  For example, in South Africa a more similar proportion of men and women access the internet, but among those who are online, women have a lower level of participation than men.  In contrast, in Ghana where the gender gap in access is large, the men and women who do have access have more similar levels of engagement.

In sum, this analysis makes it clear that for the internet to be a truly useful tool for disadvantaged groups of people, much more ought to be done to get women in particular to develop more technical skills and online literacy, as well as solving other affordability and access issues. If not, many of the most vulnerable people will remain excluded from the digital possibilities including access to services, information, networks and ideas. While a few tentative steps online might mean people tumble into Facebook and other social networks, it is essential they don’t just get sucked into the whirlpool of singing dogs, snoring camels and other people’s dinner from which they may or may not emerge. Rather, people need to engage with more sophisticated online activities if they are to click their way onwards and upwards. A snoring camel ain’t gonna help with that.

The full version of the paper is available on request.

[1] The analysis was based on the data from the annual, multi-country survey conducted by Kantar TNS, called “Connected Life”. The survey covers technology and internet behaviours amongst internet users. All those interviewed use the internet at least once a week, and the sample for each country is weighted to be nationally representative of weekly internet users aged 16+. The data was collected between June and August 2015.

[2] Source: Research In Africa, Gillwald et al (2012), http://www.researchictafrica.net

[3] Note that in Kenya mobile payments are commonly made using M-Pesa, but the level of penetration of mobile money is much lower in other countries.

Focus group discussion or individual interview? The reality of quantitative interviewing in developing countries

Written by: Alexandra Cronberg

Do you ever find yourself trying to hold a conversation with someone in a noisy, busy environment? Perhaps it’s even in your house. Perhaps there are kids running around, teenagers watching TV, and your relatives have come to stay. It can get crowded. Then there’s a knock on the door. Indeed, someone else – an interviewer – has come to ask for a little of your time. You happily oblige, but there may not be a quiet corner for the interview, and others will inevitably overhear what you are saying.

This is the reality for many of our respondents, and a common challenge faced by our enumerators. Our populations tend to have large families. Space is often scarce, with one-room houses being commonplace in urban areas. Households in rural areas might have more space, inside or outside, though this space seems to quickly fill up with curious onlookers.

The interview environment is thus not always ideal. This raises the questions: What proportion of interviews is indeed affected by noise and bystanders, and what is the impact of less than ideal interview settings? Does it matter? To what extent does it affect the quality of the data we collect? If so, what are the key concerns?

Our colleague at RTI, Charles Q. Lau, in collaboration with Melissa Baker, CEO of Kantar Public Africa & Middle East, conducted an analysis to answer these questions together with a few other co-authors. The article was published in the International Journal of Social Research Methodology (2016)[1]. Read on for a summary of the findings.

The Results

The findings are based on 15,309 face-to-face in-home interviews representative of the adult populations of five countries in Africa and Latin America (Ghana, Nigeria, Uganda, Brazil, and Guatemala), conducted in 2014 and 2015. The study answered the questions below.

How common are bystanders and noise in the interview context?

Well, it varies. Interviewers do their best to conduct interviews in a private place, out of hearing of others. However, the household context in these countries means this is often not possible. In terms of bystanders, interviews were ‘completely private’ in only 64% of cases in Brazil, 59% in Ghana, 54% in Guatemala, 53% in Uganda, and 33% in Nigeria. Bystanders are mostly non-family members and extended family, such as neighbours and domestic staff, but also children. In contrast, it appears most spouses have better things to do than listen in to their husband’s or wife’s survey responses.

Most interviews across all countries take place in a ‘quiet and calm’ setting. Even so, children, televisions, telephones and other distractions affect a fair share of interviews: between 19% (Brazil) and 45% (Guatemala) were done in more or less noisy surroundings (either a bit of noise, or very noisy and chaotic).

So the million-dollar question is: Do bystanders affect responses to questions?

The good news is that bystander presence has little effect on responses to non-sensitive questions. The analysis found there is little association between the presence of onlookers and response distributions for technology-related questions, ‘don’t know’ responses, and survey satisficing (that is, the tendency to answer questions so as to minimise effort rather than respond in a truthful manner). So, in terms of non-sensitive topics we (and you!) can rest assured that standard interview settings in these countries do just fine for gathering good quality data.

Bear in mind however that this survey covered the topic of technology, which is by and large a non-sensitive topic. Other studies have shown that bystanders do have an effect on responses to sensitive questions, such as domestic violence and drug and alcohol use. For surveys asking sensitive questions, this study highlights the need to carefully consider the interviewing context, given how common it is that respondents are surrounded by bystanders and noise.

Could bystanders actually help to improve data quality for factual questions?

Well, yes, but only if the bystander is the husband or wife. However, most curious onlookers are neighbours, children, or extended family rather than the spouse. So the overall impact on data quality is negligible. Indeed, only 3-4% of interviews in Ghana, Nigeria, Uganda and Guatemala had the spouse present. In Brazil it was 11%. Having said that, among the few spouses present, some of them do chip in with factual information. This was especially the case in Nigeria, where almost half of spouse-bystanders assisted the respondent.

How does the interview environment affect data quality?

Perhaps unsurprisingly, noise has a negative impact on interviewer-respondent interactions. Noisier and more chaotic surroundings are generally associated with lower levels of respondent cooperation, attention and friendliness. However, in terms of the proportion of interviews in our study that were disrupted by chaos and noise, this figure was low: in Brazil, Ghana and Uganda only 2-5% of interviews were conducted in a very noisy and chaotic environment. The equivalent figures for Nigeria and Guatemala were a bit higher, ranging between 11 and 15%.

Having said that, again the good news is that noise and distractions had little effect on data quality itself. Indeed, interviewers seem to know how to cut through the noise! Key quality measures – level of ‘don’t knows’, satisficing, and response distributions – were not significantly associated with interviewing environment. We can therefore be confident that the data we collect is of high quality, indeed reflecting respondents’ attitudes and behaviour rather than the environment.

On that note, I will end this communication and say thank you for reading. That is, assuming you weren’t already distracted halfway through…

[1] Charles Q. Lau, Melissa Baker, Andrew Fiore, Diana Greene, Min Lieskovsky, Kim Matu & Emilia Peytcheva (2016): Bystanders, noise, and distractions in face-to-face surveys in Africa and Latin America, International Journal of Social Research Methodology.