No time for reading? The real reason why book reading is declining in South Africa

Written by: Alexandra Cronberg

Put on Sabina’s hat. Wait for the train in sweltering heat with Ifemelu. Sit down with Nathan Zuckerman and talk about the inevitability of getting people wrong. Or enter the world of the characters in any other novel. Whatever the book, the benefits of reading are numerous. It helps build empathy, imagination, and critical thinking. These traits not only enrich personal lives but can contribute to social cohesion and innovation. Reading has also been shown, time and again, to be a strong predictor of educational attainment and academic success. Hence it can help to reduce social and economic inequalities in a country.

Against this backdrop, the South African Book Development Council (SABDC) is leading an initiative to promote reading, in particular among disadvantaged groups. As part of this work, they commissioned Kantar Public (at the time operating as part of TNS) to gather information on South Africans’ reading habits, and to segment the population based on those habits and willingness to read books[1]. The study applied tools developed for market research purposes, and shows how such tools can gainfully be used in the context of social research. Specifically, two market research tools were used: First, a segmentation analysis was carried out to group the population by reading habits and inclination to read books. This segmentation helped answer such questions as ‘Who are the people loath to ever open the covers of a book, and how common are they in the population?’ Or, ‘Who are the people with books on their bedside table, which are gathering more dust than delight?’[2] Second, the ConversionModel was used to estimate how much free time the different segments dedicate to reading, and how much time they would ideally spend if there were no barriers to getting lost in a book. What’s the discrepancy, if any?

The study showed there is much that competes for the time and attention of South Africans. Listening to the radio, watching TV/movies, going to the mall, and hanging out with family or friends are all more popular activities than reading. With respect to printed books, only four in ten households have a book in the house. South African readers spend, on average, four hours per week reading, though not necessarily books. Compared to a previous survey, the study showed that reading has dropped in popularity as a leisure activity: in 2006, 65% of South Africans reported having read in the past month. That figure was down to 43% a decade later.

Turning to the results of the segmentation, the study showed that almost three-quarters (73%) of the population are ‘low potential’ printed book readers; that is, this segment not only prefers to watch TV, listen to the radio, or go to the mall, but would probably rather dust the shelves than read. The other segments are ‘committed readers’ (14%), ‘less committed readers’ (10%), and those who are ‘open’ (3%).

The value of this analysis lies in the tailored strategies that the SABDC can develop for each of the segments, and the targeted level of effort involved. Some people may only need a bit of encouragement to pick up that book waiting on the bedside table, whereas others need to find new occasions to take up reading. For yet another group, readership starts from a blank page, so to speak.

Moreover, the results from the ConversionModel showed that in South Africa there is generally only a small discrepancy between the actual time spent reading and the ideal amount of time dedicated to this pastime. Hence, the falling readership figures are probably not due to increasingly busy lives, but to a shift in activities and preferences. It might not have been what the SABDC wanted to hear, but it nevertheless helps inform their strategies and initiatives.

So, the SABDC is bound to stay busy for a while, working to get South Africans to pick up those books waiting to be tickled with the turn of a page. More than that, much effort is needed to get people to visit the library or bookstore in the first place. Yet the right information to aid the design of their programmes and initiatives makes their task easier: The study findings mean they can specifically target the groups with the greatest potential. As a result, there may be more people who will put on Sabina’s hat, wait for the train with Ifemelu, or sit down in a bistro with Nathan Zuckerman, but more importantly, step into any story or book.


[1] National survey into the reading and book reading behaviour of adult South Africans 2016. The study was a nationally representative household survey (n=4000). The report is available at:

[2] The questions may have been worded slightly differently in the project report.

What works? Reflections on the International Summit on Social and Behaviour Change Communication

Written by: Alexandra Cronberg

Whether you know it or not, you are probably subject to social and behaviour change interventions in daily life. If you live in South Africa, perhaps you have heard radio ads or seen ads on Twitter or Facebook, talking about the importance of wearing seatbelts in the car. If you live in India, perhaps you have seen billboards with the slogan “Drink whisky, drive risky”. In many countries in Europe, cigarette packages now display graphic pictures of cancerous organs, and smoking is banned in public places. In countries all over the world, employees who are given the choice of joining a pension scheme often have the box ticked by default.

As these examples illustrate, behaviour change interventions can take many forms, including communication campaigns, taxes, legislation, and ‘nudges’.

These types of interventions are all part of the field of Social and Behaviour Change (SBC). Sometimes it’s referred to, more or less interchangeably, as Social and Behaviour Change Communication (SBCC) or Communication for Development (C4D). Whichever term is used, social and behaviour change is an umbrella field consisting of specialists in subjects such as communications, behavioural economics, anthropology, sociology, and Human Centred Design, to mention but a few.

The evolving nature and growing popularity of the field was evident at the International Summit on Social and Behaviour Change Communication, held in Bali a couple of weeks ago. Over 1,200 participants attended, three times as many as at the preceding summit in 2016.

The summit posed the pertinent question: What works?

To address it, the conference agenda offered a vast range of sessions on interventions, approaches, and measurements. The interventions targeted everything from increasing the use of family planning by reaching mothers-in-law, to radio dramas addressing gender-based violence by challenging prevailing social norms, to television ads normalising HIV testing among gay men. Presenters also talked more broadly about measuring and understanding social norms and networks, successfully scaling up interventions, strengthening measurements, as well as new innovative research methods.

It certainly made for interesting content on What Works? The question of Why things work was, however, more scantily answered. Admittedly, that question was not part of the summit title. Yet it is also an important one if we want to exhibit a degree of predictability in these matters. While the complexity of human behaviour, and the often non-linear nature of change, means there is no simple answer or single model for addressing behaviour change, the very complexity of the matter means it is essential to use – and gain – insights into conscious and unconscious drivers of behaviour. Only with such insights can the design and effectiveness of such interventions be maximised and further advanced.

So what should be next for SBC? Arguably the challenge is how practitioners in the field – the behavioural scientists, economists, sociologists, communication experts – share knowledge and collaborate to answer not only the question of What Works, but also Why?

Offering a forum to discuss such gaps and potential future actions, the SBCC summit included a small but valuable working session on ‘What does the research agenda for social and behaviour change need to address?’. Attended by a dozen or so academics, researchers, and practitioners, the session promised a good start to bringing these groups together, sharing knowledge, and building a joint research agenda. The conversation is currently continuing online, with actions to follow. Other points raised included the need for a better understanding and sharing of innovations, the ethics of interventions, sustainability, cost analysis, and a practical conceptual model of influencers.

Hopefully this collaborative initiative can help share knowledge, build on existing insights to improve the effectiveness and predictability of social and behaviour change interventions, and so contribute to the further development of the field.

What’s more, hopefully the initiative will succeed without the need for any behaviour change intervention of its own.

Mobile technology: The future of evidence in development?

Written by: Alexandra Cronberg

The future is all about mobile technology, right? Well, perhaps, but in the context of real programme evaluations, it is worth examining and understanding the benefits and drawbacks of mobile data collection modes for gathering evidence, before waving goodbye to human interviewers.

In order to address this topic, Kantar Public and the British Council hosted a joint event which took place in London, covering three studies including one in partnership with RTI. The full slideset with the findings is available here. Read on for a brief summary.

  1. How efficient is mobile SMS vs other methods of collecting evidence from teachers in the Connecting Classroom programme?

In order to answer this question, we carried out a pilot survey among participants who had attended the British Council’s Connecting Classroom training in Ethiopia. The study collected progress data using an SMS survey, rather than the alternatives of paper or telephone. This study showed that SMS has many benefits and some challenges: it is a cost-efficient and viable option for collecting progress data among a known target population of programme participants, although the response rate is lower compared to telephone and paper. Moreover, there were some unexpected challenges during the pilot, including internet downtime and a change in the mobile network operator’s airtime bundles, which affected the administration of incentives. This highlighted the importance of allowing plenty of time for testing and piloting the survey.

The second study addressed the following question:

  2. What is the potential for using Interactive Voice Response (IVR), compared with SMS, telephone surveys (CATI), and face-to-face surveys, for collecting information from the general population?

This study, which Kantar Public conducted in partnership with RTI, compared response rates and representativeness of mobile data collection modes (i.e. SMS, CATI, and IVR) with those of face-to-face interviews. All of these studies targeted the adult general population in Nigeria aged 18 to 64. The results showed that the response rates of SMS and IVR are very low, and even for CATI the rate is much lower than for face-to-face. This would not be such a problem if the achieved samples were representative of the general population. That is, unfortunately, not nearly the case. The study showed that the achieved samples using SMS and IVR are strongly skewed towards better educated and younger people, and also towards men. People often think that applying statistical weights to improve sample representativeness is the solution to this problem. However, the findings showed that weighting does not solve the problem, not least when looking at voting behaviour. In fact, weighting actually increased the bias. Finally, with respect to cost, once we adjusted for questionnaire length and sample size, SMS and IVR are more expensive than CATI on a question-by-question basis.
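The intuition behind why weighting can fail here can be sketched with hypothetical figures (these are illustrative numbers, not the study’s data): post-stratification weights scale each group up or down to its known population share, but when a mode badly under-represents a group, the estimate comes to lean heavily on a handful of respondents, so any unrepresentativeness within that group is amplified rather than removed.

```python
# Minimal sketch (hypothetical figures) of post-stratification weighting:
# each group's weight is its known population share divided by its share
# of the achieved sample.
population_share = {"young": 0.5, "old": 0.5}
sample_share = {"young": 0.8, "old": 0.2}   # SMS/IVR skew towards the young

weights = {g: population_share[g] / sample_share[g] for g in population_share}
# young respondents are down-weighted (0.625), old ones up-weighted (2.5)

# Weighted estimate of some behaviour reported by 60% of sampled young and
# 30% of sampled old respondents: the weights restore the population balance...
estimate = 0.8 * weights["young"] * 0.6 + 0.2 * weights["old"] * 0.3
# ...but the "old" contribution now rests on a few heavily weighted people,
# so if those few are atypical of their group, the bias grows.
print(round(estimate, 2))  # 0.45
```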

The third study addressed this question:

  3. At the classroom learning outcome level, what role can mobile play? Can mobile improve the immediacy of outcome data collection?

This part of the presentation related to a pilot study that the British Council carried out to test how new technologies – in this case a mobile phone app – can be used to assess core skills at a classroom level. This app enabled assessment at the “point of learning” by teachers, peers, and students themselves through self-reflection. It proved to be a useful, easy-to-use tool with scope for further roll-out.

In sum, these studies showed that SMS and IVR have some potential for use in survey data collection, but that representativeness is a serious concern when these data collection modes are targeting the general population. Perhaps the future isn’t quite yet what we think it is.


Global Thinking and Local Scones: Experience of Doing an MA in Development Studies

Written by: Alexandra Cronberg

What is it like doing an MA in Development Studies? More specifically, what is it like doing an MA in Development Studies when you are born and bred Nigerian, and have already spent fifteen years working with organisations such as the World Bank, USAID, and Lagos State Government?

Mariam Fagbemi, Head of Kantar Public Nigeria and recently graduated from her MA at Sussex University, tells us of her experience.

“I’ve been working in research-based consulting and evaluation for many years, so I was really excited to get the opportunity to do an MA in Development Studies at the Institute of Development Studies, Sussex University. What we do at Kantar Public is very hands-on, giving us a deep understanding of the “field”, whereas some of our clients have a PhD and quite a theoretical approach, so that can cause a bit of a clash! While I realized I already knew much of what was being taught in practical terms, the course helped me put it all into theoretical frameworks. In that sense I feel I have now got both sides – the practical knowledge from my experience, as well as the theoretical approach. The MA certainly met my expectations of what I’d learn.

What was it like? Well, aside from the cold weather and expensive costs, it was a great experience. It was pretty intense, though. There was a lot to take in and process during the course, and I also had to keep an eye on the work going on back in Nigeria. And that’s not even mentioning trying to stay in touch with my family and friends…

Thankfully my cousin traveled down from Milton Keynes with a pile of jackets and duvets just in time for the weather change, so at least I could wrap up against the icy seaside winter. I wasn’t quite so insulated, unfortunately, against the cost of living. The untimely fall of the Naira just before the start of my MA meant my scholarship and savings suddenly lost a third of their value. I had to live on scones and Pepsi to cope! While the university cafeteria made them very nice and fresh, admittedly it was a bit of a slog sometimes.

In terms of the course and my classmates, it was a mix between students freshly graduated from their first degree and people like me with plenty of practical work experience from the field, including people working for NGOs and donors, and a journalist. The professors were all active practitioners, as well as doing their own academic research. We certainly had some interesting discussions in the seminars. The less experienced members of the class were keen to hear about practical experiences from developing countries, which I and a few others willingly shared. So I both learnt a lot and shared my own learnings a lot.

Specifically, I took a course on development studies generally, and then specific courses on climate change, gender & development, governance, globalisation, and poverty programming, which mostly covered social protection and microfinance. My thesis was on donor-funded “graduation programmes”[1] and how they enable people to graduate out of poverty. These programmes generally last 18 months and include, for example, entrepreneurial support and consumption support. I learnt there is no silver bullet to solving development issues, but interventions stand better chances of succeeding if they don’t discount local knowledge or the lived realities of the programme beneficiaries.

Related to that, it seems like a contradiction that the importance of local ownership and involvement in development programmes is repeated like a mantra, yet most academic departments offering these courses are actually based in the global north. Studying for this masters made me realise that much more research should be done in the global south, rather than the north. Why is it that there are far more options for doing an MA in Development Studies in Europe or America than in Africa, or at least than in Nigeria? Why are universities and other stakeholders not more commonly establishing, say, Centres of Excellence for African Research at African universities, and giving scholarships for interested people to attend these centres? Why are the “best” schools of development research all based in developed countries?

Aside from concentrating relevant knowledge in the “wrong” place, it also makes it very expensive for students from the global south to learn and participate in these studies, which I experienced in a pretty visceral way, literally!

Having said that, I should also mention that I just learnt that Kantar Public is setting up a scholarship for evaluation learning in Nairobi soon, so that is one step towards building evaluation capacity locally.

The masters degree was well worth the effort, though. The MA in Development Studies opened my eyes to how broad the field really is, indeed it is way bigger than I thought. After a year-long course I’ve only scratched the surface of development studies. I’m glad, however, I didn’t do a master’s degree straight out of university. I think you need to be a bit more seasoned, need a bit more life experience and insight into human experiences to get the most out of an MA. I’d say that is especially true for development studies.

[1] See, for example, this page for further details:

Co-operation or freeloading: What is the effect of conditional versus unconditional incentives in an SMS survey?

Written by: Alexandra Cronberg


Gifts can be a tricky business. While they may stem from pure generosity and care, they often come with sticky strings. Just ask all the companies that tightly regulate the receipt of gifts from, say, potential clients or partners. Such are human relationships that obligation and reciprocity often govern behaviour and interactions, for better or worse.

In survey research we may draw on the same deep-seated human traits of obligation and reciprocity to get respondents to complete our questionnaires. We can do this by giving an unconditional gift, i.e. incentive, in advance of asking for participation. Indeed, several studies[1] on postal surveys have shown that unconditional incentives do lead to higher response rates compared to giving a gift conditional upon completing the survey, which arguably treats the questionnaire more like a transactional exchange.

The use and administration of incentives is a particularly relevant issue for surveys making use of self-completion questionnaires, such as postal and SMS surveys: These data collection modes do not have the benefit of an interviewer who can coax respondents to take part and therefore need to rely on incentives to a greater extent.

Now, the same studies showing that unconditional incentives in postal surveys lead to higher response have also shown that unconditional incentives are actually not cost efficient. This can be due to undelivered letters or the absence of eligible respondents. Some respondents will also take the incentives, e.g. a voucher attached to an advance letter, without completing the questionnaire. Consequently, in practice there are few postal surveys that actually administer incentives unconditionally.

With the increasing popularity of SMS surveys, it is pertinent to ask whether unconditional incentives have the same effect on SMS as on postal surveys, and whether it is cost efficient or not. In particular, SMS has the advantage over postal surveys that respondents can easily opt in, meaning cost efficiency may well be improved.

In order to seek the answer to these questions, Kantar Public carried out a small experimental study together with the British Council. Read on to find out the results.

This study

The study involved an SMS survey with an experimental design to test the effect of administering conditional versus unconditional incentives. The study also sought to test the feasibility more broadly of using SMS as data collection mode to gather feedback and progress updates from British Council course participants, but that question is the topic for another blog post.

The survey was carried out among course participants in a British Council teacher training course in Ethiopia and the questionnaire comprised 16 questions. The sample consisted of 434 respondents with valid telephone numbers. Respondents were randomly allocated into one of two groups, Group A and Group B. The initial message was successfully delivered to 390 respondents (Group A: 199 resp. and Group B: 191 resp.). Each group was administered the survey as shown in the diagram below.

Group A & B

At the beginning of fieldwork, respondents were sent a message alerting them to the survey. A day later they were sent another message asking them to participate. In order to participate, respondents were instructed to first opt in by responding to the message. For Group A, the questions were then sent out, followed by the incentive, provided the respondent completed all 16 questions. For Group B, the incentive was sent immediately after the respondent opted in, followed by the questions. The incentive consisted of airtime worth 15 Ethiopian Birr, equivalent to 0.55 US dollars.


The findings from the study suggest that offering the incentive in advance yields a slightly higher response rate compared to an incentive conditional on the respondent completing all the questions. As shown in the table below, among Group B, 25% completed all the questions whereas in Group A the equivalent figure was 21%.
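As a rough sketch of the arithmetic behind those percentages (completed-interview counts are rounded, since the report gives rates rather than counts):

```python
# Delivered initial messages per group (from the study) and the reported
# completion rates; completes are rounded to whole respondents.
delivered = {"A": 199, "B": 191}
completion_rate = {"A": 0.21, "B": 0.25}

completes = {g: round(delivered[g] * completion_rate[g]) for g in delivered}
print(completes)  # {'A': 42, 'B': 48}
```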

These figures are broadly in line with surveys of this nature. That said, it is clear that response is still fairly low even among Group B.

How does this affect cost efficiency? As mentioned above, one advantage of SMS surveys over postal ones is that respondents can easily opt in before any other message or incentive is sent to them. This means that unconditional incentives are only sent to respondents who have a valid telephone number and who are eligible, thus minimising loss. There is, however, still the potential issue of respondents taking the incentive without completing the questionnaire. This problem turned out to be quite a notable one in our SMS survey. Among respondents who opted in, nearly half of Group B (48%) did not complete the questionnaire. That means a large share of respondents took the incentive but ditched the questionnaire. The proportion who opted in but failed to answer all questions was somewhat higher for Group A (56%). Yet the resulting cost for the airtime incentives overall (and per completed interview) was lower for Group A, since we did not allow for any freeloaders.

Putting monetary values to the incentives given to Groups A and B, we can see that the total cost for Group B was ETB 15 × 93 = ETB 1,395 (USD 59.30), equivalent to an average of ETB 29 per completed interview. This compares to a cost of ETB 15 per completed interview among Group A, for a total of ETB 15 × 42 = ETB 630 (USD 26.77). Consequently, we might draw the conclusion that cost efficiency is a major concern also for SMS surveys when administering unconditional incentives.
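The cost arithmetic can be reproduced in a few lines. The 93 Group B opt-ins and 42 Group A completes come from the figures above; Group B’s roughly 48 completes are inferred from the reported totals (ETB 1,395 at about ETB 29 per complete), so treat that count as an approximation.

```python
# Sketch of incentive cost per completed interview for the two groups.
INCENTIVE_ETB = 15

# Group B: every opt-in received the incentive up front.
group_b_total = 93 * INCENTIVE_ETB          # 1395 ETB paid out
group_b_per_complete = group_b_total / 48   # ~29 ETB per completed interview

# Group A: only respondents who completed all 16 questions were paid.
group_a_total = 42 * INCENTIVE_ETB          # 630 ETB paid out
group_a_per_complete = group_a_total / 42   # 15 ETB per completed interview

print(round(group_b_per_complete), round(group_a_per_complete))  # 29 15
```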



Based on the results from this experimental SMS study among teachers in Ethiopia, we can see that unconditional incentives yielded slightly higher response compared to administering incentives conditional upon completion of the questionnaire. This finding is in line with other studies, and re-affirms the view that drawing on respondents’ sense of obligation and reciprocity is more productive than treating survey participation as something of a transactional exchange.

That said, it is clear that a large share of respondents are not that bothered about reciprocity in the face of a free gift, even when first asked for their active participation. In this light, administering unconditional incentives in an SMS survey is arguably not cost efficient, with the average cost of unconditional incentives per completed interview nearly double that of the conditional alternative.

Hence, the sense of obligation and reciprocity may well be part of deep-seated human traits and behaviour, but it seems that in a context of technology and faceless interactions, many respondents will turn into freeloaders. Unfortunately for us social researchers, free airtime does not seem to come with sticky strings.


[1] See for example Simmons, E. and Wilmot, A. ‘Incentive payments on social surveys: a literature review’, published by the Office for National Statistics in the UK, 2004. See also Abdulaziz K, Brehaut J, Taljaard M, et al. ‘National survey of physicians to determine the effect of unconditional incentives on response rates of physician postal surveys’. BMJ Open 2015;5: 007166.doi:10.1136/bmjopen-2014-007166

A bit like finding a husband? Success factors for implementing segmentation analysis in social research studies

Written by: Alexandra Cronberg


Segmentation analysis is gaining popularity in social research. While it has long been used in market research, this analytical approach can also add value in social research contexts. Specifically, it can help provide an understanding of different needs and motivations among sub-groups in a target population. Consequently, it can help donors and agencies tailor their programmes and interventions and thus increase the likelihood of success.

There is much to be said for adopting segmentation as an analytical tool in social research. Yet it is also important to recognise the differences between social and market segmentations. This helps both to apply the tool appropriately and to set the right expectations early on. In this light, this blog post talks about the main differences between market and social segmentations, and what to bear in mind to ensure segmentation studies are successful in social research.

Now, you might wonder where that husband comes into the picture. Bear with me for a moment: you can think of each segmentation solution as a potential partner. It will all become clear.


Examples of segmentation studies

At Kantar Public we have conducted a number of segmentation studies over the last couple of years, including the following projects:

  • Segmentation of the adult population in India, which is one of the countries where open defecation is a major concern. The segmentation explored and helped gain an understanding of people’s toilet acquisition behaviour, drivers, and barriers. The segments identified were Regressives, Conservatives, Prospectives, and Progressives.
  • Farmer segmentation in Tanzania and Mali to understand which African farmers are open to new behaviours. The segments identified were Contented dependents, Competent optimists, Independents, Frustrated escapists, Traditionalists, Trapped.
  • Segmentation of young women and girls at risk of HIV in Kenya and South Africa. This study aimed at understanding risk factors that increase young women’s vulnerability to HIV infection based on behavioural, attitudinal, and demographic variables. The analysis led to segments such as teenage girls just starting to explore sex and relationships; young women in traditional marriages; girls with boyfriends who always use condoms (except when they don’t); and girls with steady boyfriends and sugar daddies on the side.

These projects give a flavour of what social segmentation solutions may look like. The studies have helped our clients to better target their interventions based on the specific needs and drivers of each segment, hence illustrating the value of applying segmentation analysis in a development context.

What is meant by ‘segmentation’ and what should it look like?

Before we move on to the differences and success factors, let’s agree on what is meant by ‘segmentation’. The word segmentation is sometimes used to simply denote splitting a population into sub-categories and presenting analysis by variables such as gender or age group. While this is indeed one type of segmentation, ‘segmentation analysis’ generally refers to sophisticated statistical techniques to segment people based on carefully designed questions and topic areas, and on patterns in the data that are unknown prior to the analysis. Segmentation can be based on a wide range of factors such as socio-demographics, beliefs, attitudes, behaviour, needs, and individual emotional traits. It is this type of segmentation we are concerned with here.

The aim of segmentation analysis is to have segments that are as distinct as possible from each other, while the people within each segment should be as similar as possible. The segments should also be easily identifiable in the population from a practical point of view. Furthermore, a successful segmentation should offer insights, some ‘ah ha!’ experience, and be intuitive enough to strike a chord with the client and stakeholders. If not, the segments are unlikely to gain traction.[1]
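For the statistically minded, the clustering step at the heart of a segmentation can be illustrated with a toy k-means routine on made-up respondent scores. This is only a sketch: real segmentation studies typically use richer techniques (latent class analysis, for example) on many more carefully designed variables, but the principle of grouping similar respondents around cluster centres is the same.

```python
import random

random.seed(7)  # fixed seed so the toy example is reproducible

def kmeans(points, k, iters=20):
    """Cluster (attitude, behaviour, ...) tuples into k groups."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        # assign each respondent to the nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # move each centroid to the mean of its cluster
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, clusters

# Hypothetical respondents: (attitude score, behaviour score) on 0-1 scales
respondents = [(0.1, 0.2), (0.15, 0.1), (0.9, 0.8), (0.85, 0.95), (0.5, 0.5)]
centroids, clusters = kmeans(respondents, k=2)
```

In a real study the resulting clusters would then be profiled and named (‘committed readers’, ‘Frustrated escapists’, and so on) and judged against the criteria above: distinct between segments, homogeneous within them, and identifiable in practice.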

How do market and social segmentations differ?

Moving on to the differences between market and social segmentation studies, there are two main differences which I will talk about here.

Firstly, the outcome variables – that is, the factors on which the segmentation is based – may be less clearly defined in social segmentations than in market ones. While market segmentations generally focus on segmenting the target population on the basis of a single outcome variable and a single behaviour – purchase of a product – social segmentation studies tend to be more complex than that. Social ones often (a) look at multifaceted and socially sensitive behaviours and (b) try to explain multiple behaviours, each of which is affected by a different set of drivers and barriers.

As mentioned above, one of the benefits of using segmentation analysis in development is that programmes and interventions can be tailored according to the specific needs and behaviours of the target population. The key outcome variables for a programme may indeed be dependent on the findings from the segmentation analysis. This means that outcome variables may not actually be known or clearly defined at the beginning of a project.

In the context of young women at risk of HIV, there is a multitude of behaviours that lead to increased vulnerability. Risky behaviour may stem from lack of willingness to go out of one’s way to get a condom, lack of confidence to insist on condom use, or the keeping of multiple and/or concurrent boyfriends, to mention but a few. These behaviours, in turn, may be related to opportunities and socio-economic factors. There may also be physical barriers, such as inaccessibility to places providing free condoms, or lack of money to buy them. These factors can all feed into the segments, which subsequently reflect a variety of risk factors and population profiles. The intervention could focus on any one, or more, of these risk factors and drivers.

With complex segmentation studies such as the one on young women at risk of HIV, the analysis is often an iterative exercise where solutions are scrutinised and re-scrutinised as part of the process. In fact, you could say it is a bit like finding a partner or spouse with whom you want to settle down: you might need to meet a few potential partners before you even fully realise what it is you are seeking. Now, some researchers have estimated that the ideal number of partners to date before settling down is as high as 12![2]

Turning attention back to segmentation, the multitude of outcome variables, and the often complex associations between behaviours, attitudes, and needs, further mean that segments produced in social segmentations are unlikely to be as neat as standard market segments.

As with a potential long-term partner, no segmentation solution is perfect. It is thus a matter of deciding what the most important traits are, and focusing on those. Although we may dream of extremely well-differentiated segments, each consisting of a highly homogenous group, we are unlikely to observe such a pattern across the full range of relevant variables. For example, among our young women, social norms and touch points turned out to be less differentiating than protective behaviour against HIV and experience of abuse.

On this note, it is worth highlighting the importance of including a sufficient number of behavioural variables in the segmentation. While behavioural variables may not necessarily be more differentiating than attitudinal ones, they tend to have more practical value for identifying the target groups in the population at large. It is therefore important to ensure that a sufficient range of relevant behavioural variables is covered.

Success factors

Having talked about segmentation analysis in broad terms, and the main differences between market and social segmentations, we can summarise the learnings for successful social segmentations as follows:

  1. Define as clearly as possible the element(s) (behaviours, attitudes etc.) on which you want the segments to vary, while acknowledging the complexities in social segmentations. Identifying the right segmentation variables is critical for a successful segmentation. However, the lack of a single outcome variable, and the multifaceted relationships between behavioural, attitudinal and demographic variables, mean segmentation analysis may involve an iterative process of finding the most suitable solution. It also means that segments may not be as clearly defined as standard market segments.
  2. Make sure the segments are easily identifiable in the population and, if necessary, tilt the balance towards behavioural factors. As for any segmentation, whether in market or social research, it is important that segments are identifiable in the population at large. How will the target groups be reached in practice? Behavioural variables tend to be more useful for this purpose, but this is dependent on the nature of the intervention.
  3. Allow time and resources to find the optimal segmentation solution. Two or three iterations are unlikely to be enough, so it is important to allow sufficient time for analysis. Finding the right segmentation solution is indeed a bit like finding a spouse. None is perfect, and it is only after meeting a few potential partners that one better knows what to settle for.
  4. Align expectations early on, since the resulting segments are unlikely to be as neat as standard market segments. In light of the points above, it is important to acknowledge the differences between market and social segmentations, and the expected outputs. Set the right expectations from the start, and finding a segmentation solution will be a much smoother exercise.

Social segmentations have immense potential to add value and insight to programme designs, in particular to better understand the needs and drivers across different sub-groups in the target population. Bear in mind the points above, and you will maximise the chances of finding a set of segments that will succeed in making you happy. Perhaps not forever after, but at least until your next programme.


[1] I won’t go into the technical details of segmentation here, but it is worth noting that there are several different statistical methods for conducting segmentation analysis. One common analytical approach is Latent Class Analysis (LCA), which was used for the HIV-related project, for example. The segmentation analysis typically produces outputs for several different segmentation solutions, such as solutions with 3, 4, 5, 6 and 7 segments. When deciding which solution to use, we normally look at the segments based on the segmenting variables and also cross-tabulate the segments against other variables in the questionnaire. Pen portraits of the different segments can then be produced to help decide which solution is the most useful one.
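To make the idea of comparing candidate solutions concrete, here is a minimal, purely illustrative Python sketch. It uses a toy k-means clustering on invented yes/no survey responses as a stand-in for LCA (which needs specialist software), and every variable, profile and number here is made up; the point is simply that one fits several segment counts and inspects how tight each solution's segments are before choosing one.

```python
import random
import math

random.seed(0)

# Toy data: each row is a respondent's answers to five yes/no survey
# items (1 = yes), drawn from two underlying "types" mixed together.
def make_respondent(profile):
    return [1 if random.random() < p else 0 for p in profile]

data = [make_respondent([0.9, 0.8, 0.1, 0.1, 0.2]) for _ in range(60)] + \
       [make_respondent([0.1, 0.2, 0.9, 0.8, 0.7]) for _ in range(60)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20):
    """Plain k-means; returns segment centres and total
    within-segment dispersion (lower = tighter segments)."""
    centres = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist(p, centres[i]))].append(p)
        centres = [
            [sum(col) / len(g) for col in zip(*g)] if g else centres[i]
            for i, g in enumerate(groups)
        ]
    wcss = sum(
        dist(p, centres[min(range(k), key=lambda i: dist(p, centres[i]))]) ** 2
        for p in points
    )
    return centres, wcss

# Fit several candidate solutions, as one would with 3-7 LCA segments.
for k in range(2, 6):
    _, wcss = kmeans(data, k)
    print(f"{k} segments: within-segment dispersion = {wcss:.1f}")
```

In practice the dispersion numbers are only a starting point: as the footnote notes, the choice between solutions also rests on cross-tabulations and whether the pen portraits make substantive sense.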


Functional Literacy: A Better Way of Assessing Reading Ability?

Written by: Alexandra Cronberg

When I lived in Nigeria, my driver, a young man in his 20s, told me he had gone to school for six years. Yet he struggled to read and write. Once, when taking me to the airport, he almost missed the turn for ‘Departures’. I realised he couldn’t read the sign. Other times he sent me text messages containing scrambled letters and words that I deciphered with a smile and a bit of sadness. I later learnt that he was going to school again to improve his literacy. The thing is, he was also a boxer who competed internationally. He said it was difficult for him to travel without being able to read. That ‘Departures’ sign was indeed important for his own life too.

Literacy is clearly key to getting on in life, whether you are well off and taking it for granted, or disadvantaged and struggling to read. Without the ability to read and write, you might miss out on opportunities to learn, adopt new practices, or indeed get by in everyday life. For organisations and governments working to improve the situation for poorer people in Africa and Asia in particular, it is essential to know what the level of literacy is and what the gaps are. As illustrated by my driver, the level of schooling is often not a good measure. Literacy needs to be measured specifically.

There are several ways in which this can be done. Literacy measures at population level normally involve a quantitative household survey[1]. The degree of usefulness and resource intensity of the measures varies, however. Data are usually collected face-to-face, though the more simplistic measures can be applied in other modes as well. Here I will briefly discuss the pros and cons of the main approaches, and also highlight the method of ‘functional literacy’, which has been developed and implemented by IBOPE Inteligência (associated with Kantar Public in Brazil), Instituto Paulo Montenegro (the social arm of IBOPE), and Ação Educativa, a non-governmental organisation focused on education in Brazil.

African children from Samburu tribe during English language class under the acacia tree in remote village, Kenya, East Africa. Samburu tribe is one of the biggest tribes of north-central Kenya, and they are related to the Maasai.

In this blog post I will focus on ways of measuring reading ability, but similar approaches can be applied for writing ability and basic numeracy. Moving on, then, to the main approaches:

  1. Asking about reading ability directly. For example “How well can you read?” or “How well can you read a newspaper?” Response options may be “Very well”, “Somewhat well”, and “Not at all”.

Clearly this approach relies entirely on respondents’ subjective opinion of how well they can read, and may also be subject to social desirability bias. It may be influenced by the reading ability of people around them, and by their own rose-tinted self-perception. Perhaps a respondent can easily read her brother’s text message – better than anyone else in the household – but she might struggle with more complicated texts. She would like to say she can read very well. How will she respond?

Having said that, there are times when self-perceived ability is what matters, for example when the aim is for people to put themselves forward for adult education. Another advantage of this otherwise quite limited approach is that it is a very short question that can fit into even SMS questionnaires. Moreover, the version of the question that simply asks how well respondents can read avoids the issue of defining the language. While this may be a drawback if more in-depth information is required, the question can serve to give a general sense of literacy levels.

Asking specifically about newspaper reading means a reference point-of-sorts is introduced. However, it also raises the issue of language. What if most newspapers are published in, say, English rather than local languages? Which language should the question refer to?

Finally, it is worth mentioning that the literacy questions above are sometimes asked with respect to other people in the household rather than the respondent. This avoids potential social desirability bias, but it means links with other factors cannot be analysed so straightforwardly.

  2. Asking the respondent to read a sentence out loud, e.g. ‘Parents love their children’ (from the Demographic and Health Survey, as referenced in the 2006 UNESCO paper).

This approach moves closer to assessing actual ability in an objective manner, rather than relying on self-reported answers. Responses are normally coded along the lines of ability to read the ‘full sentence’, ‘partial sentence’ or ‘not at all’. While this approach is generally an improvement on self-reported measures, the sentence is usually a very simple one and provides a rather crude tool for assessment. Also, responses may not reflect actual comprehension. Few respondents succeed in reading only ‘part of the sentence’ – usually they can either read all of it or nothing – meaning it is not a very nuanced measure even for what it is trying to assess.

  3. Giving the respondent a brief text to read and then assessing their comprehension.

Giving respondents a brief text to read and then asking questions to assess their comprehension provides a better assessment of literacy than just asking them to read a sentence out loud. The example below is taken from an Education Impact Evaluation survey in Ghana (2003), again as referenced in the UNESCO paper.

“John is a small boy. He lives in a village with his brothers and sisters. He goes to school every week. In his school there are five teachers. John is learning to read at school. He likes to read very much. His father is a teacher, and his parents want him to become a school teacher too.”

The respondent is then asked questions such as ‘Who is John?’, ‘Where does John live?’, ‘What does John do every week?’ etc. Often the responses are provided in multiple choice format.

Responses are grouped into categories based on the number of correct answers. This approach provides more reliable and nuanced results than the measures above, but it arguably doesn’t capture an adequate range of literacy levels reflecting how well people can function in the real world.

  4. Functional literacy: Giving the respondent a test to assess literacy based on a series of everyday-related activities.

This approach takes the literacy assessment a step further by incorporating a number of different tasks, reflecting everyday life in the context of a given society. It thus provides a much richer measure of literacy. It specifically measures ‘functional literacy’. The test has been developed in Brazil and covers things like reading a magazine, instruction manuals, and health-related information. The test contains about 20 questions. For example, respondents are asked to look at a magazine and indicate where on the cover the title is located, or link the headings on the cover with the relevant articles. Other test questions relate to instructions on how to clean a water tank, information on who is eligible for vaccinations, and information on how to pay for a TV in instalments. The level of difficulty increases as the test progresses. The responses are then coded using the method of Item Response Theory, meaning the increasing level of difficulty is taken into account in the weighting of responses. Respondents are categorised into one of four groups reflecting the level of functional literacy: 1) Illiterate, 2) Rudimentary, 3) Basic, and 4) Fully literate.
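As a rough illustration of how Item Response Theory can turn graded test items into literacy levels, here is a hedged Python sketch of a one-parameter (Rasch) model. The five item difficulties stand in for the real instrument's roughly 20 tasks, and the difficulty values, band cut-offs and crude grid search are all my own invented assumptions, not the calibration used for the Brazilian instrument.

```python
import math

# Hypothetical item difficulties (in logits), ordered easy -> hard.
difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0]

def rasch_prob(ability, difficulty):
    """One-parameter logistic (Rasch) model: probability that a
    respondent of a given ability answers a given item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_ability(responses, lo=-4.0, hi=4.0, steps=200):
    """Crude maximum-likelihood search over a grid of ability values."""
    def log_lik(a):
        return sum(
            math.log(rasch_prob(a, d)) if r else math.log(1 - rasch_prob(a, d))
            for r, d in zip(responses, difficulties)
        )
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(grid, key=log_lik)

def literacy_band(ability):
    """Map the ability estimate onto the four reporting levels
    (the cut-offs here are illustrative only)."""
    if ability < -1.5:
        return "Illiterate"
    if ability < 0.0:
        return "Rudimentary"
    if ability < 1.5:
        return "Basic"
    return "Fully literate"

# A respondent who passes the easy items but fails the hard ones.
ability = estimate_ability([1, 1, 1, 0, 0])
print(literacy_band(ability))  # prints "Basic"
```

The key point the sketch captures is that harder items carry more information about higher abilities, so two respondents with the same raw score can land in different bands depending on which items they passed.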

As mentioned above, this approach has been developed by our Kantar Public team in Brazil in partnership with Instituto Paulo Montenegro and Ação Educativa. It now provides official literacy statistics over time for the country. In principle, the assessment can be incorporated into any questionnaire and could be adapted to other countries. The downside, however, is that it can take a bit of time. While a person who can read well would only need about 15 minutes to complete the task, it often takes much longer for someone with a lower level of literacy, not least because respondents often do not wish to give up. The other thing is that, as far as I am aware, it has so far only been developed for the Brazilian context. It would be extremely useful to adapt it to other languages and societies too, which indeed I hope we will get a chance to do.

On that note, I will end this blog post. Hopefully the continued measurement and development of global literacy indicators will help direct resources to improve people’s literacy among those who need it the most. The adoption of functional literacy in other countries would be a step in the right direction.

Hopefully better measures and improved literacy will contribute to a future where no one is held back because they struggle to locate the ‘Departures’ sign, and people like my Nigerian driver can take off in their boxing careers, or in any other ambition or aspiration they may have.

[1] For a comprehensive discussion of the first three approaches described in this blog post, see the UNESCO paper ‘Measuring literacy in developing country household surveys: issues and evidence’ (2006), available at:

From snoring camels to product diversification: A gendered analysis of internet participation in Ghana, Kenya, Nigeria and South Africa

Written by: Alexandra Cronberg

It is hard to find anything that offers so much hope and potential as increased internet access across Africa. The internet offers a whole new world of information, ideas, tools, and ways of connecting people, as well as providing sources of entertainment and distraction, certainly with silly kittens and camels galore. Importantly, it offers revolutionising ways of accessing and delivering services, including vital ones such as finance. Recent discussions with jua kali, or informal sector producers in Kenya, showed enormous potential to diversify their product lines provided they had access to and knowledge of the internet. Enabling people at the bottom of the pyramid, who currently have little or limited internet access, to make use of all of this will be life-changing.

Or so we like to think. In reality, the picture is more complex. While internet access itself may be binary, just like the data it holds, the users are intricate, inconsistent and often contradictory human beings. Indeed, internet participation cannot be reduced to zeros and ones. A paper by Kantar Public, presented at the African ITS Conference in Accra in March 2016, sheds light on the complexity of internet engagement and the factors that underpin it. The paper, authored by Nicola Marsh, is based on analysis of a global annual study of internet use conducted by Kantar TNS in a wide range of countries[1]. This particular piece of analysis focuses on Ghana, Kenya, Nigeria and South Africa.

Gender is a key part of this picture. Fewer women than men use the internet in most African countries, and these four countries are no exception. By way of example, 19% of men in Ghana have access to the internet, whereas the figure for women is a measly 9%. In South Africa, which has the highest level of internet access among the four countries, 41% of men use the internet whereas only 29% of women do so[2]. Consequently the door to the digital world remains shut for many women.

Figure 1. Internet access by country and sex, 2012


Source: Research In Africa, 2012

The KP paper analysed different levels of internet engagement and the factors that underpin different types of usage. First, an overall “internet participation” composite score was created, based on a set of common online activities and their frequency. The findings show that greater access for women, or indeed disadvantaged men, does not necessarily translate into online engagement. In fact, the countries with higher levels of access tend to have lower levels of participation. Within the countries, men consistently have higher levels of engagement than women.

Figure 2. Mean score of internet participation by country and sex, 2015


Source: Kantar TNS Connected Life Survey, 2015

Second, this overall score was then broken down into three main factors or categories of usage, capturing some of the nuance of internet engagement. The categories are:

  • Popular activities. This includes instant messaging, social networking, uploading photos, playing games, reading news/sports/weather.
  • Sophisticated activities. This includes mobile payments[3], streaming/downloading shows/movies, streaming music/radio, watching videos, and internet banking.
  • Text heavy activities. This includes blogging, visiting blogs/forums, and emails.

The gender gap is further highlighted when looking at these different categories of internet usage, with sophisticated activities having the greatest gap.

Other factors in addition to gender that lead to greater internet participation overall are younger age, better education, and higher socio-economic group. However, different life stages, defined as student status, marital status and having kids, have no consistent impact on online participation across the four countries.

Lower education and social class have less of an impact on the popular online activities. If we want women and less well-educated people to participate more, the starting point should therefore arguably be data-light services.

These findings show that as online participation increases and people lower down the pyramid gain access, proportionately more people engage with the internet in lighter ways. Women are often among those who are late to join the online party. Indeed, across the four countries the gender gap for internet participation is inversely related to the level of internet access.  For example, in South Africa a more similar proportion of men and women access the internet, but among those who are online, women have a lower level of participation than men.  In contrast, in Ghana where the gender gap in access is large, the men and women who do have access have more similar levels of engagement.

In sum, this analysis makes it clear that for the internet to be a truly useful tool for disadvantaged groups of people, much more ought to be done to get women in particular to develop more technical skills and online literacy, as well as solving other affordability and access issues. If not, many of the most vulnerable people will remain excluded from the digital possibilities including access to services, information, networks and ideas. While a few tentative steps online might mean people tumble into Facebook and other social networks, it is essential they don’t just get sucked into the whirlpool of singing dogs, snoring camels and other people’s dinner from which they may or may not emerge. Rather, people need to engage with more sophisticated online activities if they are to click their way onwards and upwards. A snoring camel ain’t gonna help with that.

The full version of the paper is available on request.

[1] The analysis was based on the data from the annual, multi-country survey conducted by Kantar TNS, called “Connected Life”. The survey covers technology and internet behaviours amongst internet users. All those interviewed use the internet at least once a week, and the sample for each country is weighted to be nationally representative of weekly internet users aged 16+. The data was collected between June and August 2015.

[2] Source: Research In Africa, Gillwald et al (2012),

[3] Note that in Kenya mobile payments are commonly done using Mpesa, but the level of penetration of mobile money is much lower in other countries.

Focus group discussion or individual interview? The reality of quantitative interviewing in developing countries

Written by: Alexandra Cronberg

Do you ever find yourself trying to hold a conversation with someone in a noisy, busy environment? Perhaps it’s even in your house. Perhaps there are kids running around, teenagers watching TV, and your relatives have come to stay. It can get crowded. Then there’s a knock on the door. Indeed, someone else – an interviewer – has come to ask for a little of your time. You happily oblige, but there may not be a quiet corner for the interview, and others will inevitably over-hear what you are saying.

This is the reality for many of our respondents, and a common challenge faced by our enumerators. Our populations tend to have large families. Space is often scarce, with one-room houses being commonplace in urban areas. Households in rural areas might have more space, inside or outside, though this space seems to quickly fill up with curious onlookers.

The interview environment is thus not always ideal. This raises the questions: What proportion of interviews is indeed affected by noise and bystanders, and what is the impact of less than ideal interview settings? Does it matter? To what extent does it affect the quality of the data we collect? If so, what are the key concerns?

Our colleague at RTI, Charles Q. Lau, in collaboration with Melissa Baker, CEO of Kantar Public Africa & Middle East, and a few other co-authors, conducted an analysis to answer these questions. The article was published in the International Journal of Social Research Methodology (2016)[1]. Read on for a summary of the findings.

The Results

The findings are based on 15,309 face-to-face in-home interviews representative of the adult populations of five countries in Africa and Latin America (Ghana, Nigeria, Uganda, Brazil, and Guatemala), conducted in 2014 and 2015. The study answered the questions below.

How common are bystanders and noise in the interview context?

Well, it varies. Interviewers do their best to conduct interviews in a private place, out of hearing of others. However, the household context in these countries means this is often not possible. Interviews were ‘completely private’ in only 64% of cases in Brazil, 59% in Ghana, 54% in Guatemala, 53% in Uganda, and 33% in Nigeria. Bystanders are mostly non-family members and extended family, such as neighbours and domestic staff, but also children. In contrast, it appears most spouses have better things to do than listen in to their husband’s or wife’s survey responses.

Most interviews across all countries take place in a ‘quiet and calm’ setting. Even so, children, televisions, telephones and other distractions affect a fair share of interviews: between 19% (Brazil) and 45% (Guatemala) were conducted in more or less noisy surroundings (either a bit of noise, or very noisy and chaotic).

So the million-dollar question is: Do bystanders affect responses to questions?

The good news is that bystander presence has little effect on responses to non-sensitive questions. The analysis found little association between the presence of onlookers and response distributions for technology-related questions, ‘don’t know’ responses, and survey satisficing (that is, the tendency to answer questions so as to minimise effort rather than respond in a truthful manner). So, for non-sensitive topics we (and you!) can rest assured that standard interview settings in these countries do just fine for gathering good quality data.

Bear in mind however that this survey covered the topic of technology, which is by and large a non-sensitive topic. Other studies have shown that bystanders do have an effect on responses to sensitive questions, such as domestic violence and drug and alcohol use. For surveys asking sensitive questions, this study highlights the need to carefully consider the interviewing context, given how common it is that respondents are surrounded by bystanders and noise.

Could bystanders actually help to improve data quality for factual questions?

Well, yes, but only if the bystander is the husband or wife. However, most curious onlookers are neighbours, children, or extended family rather than the spouse. So the overall impact on data quality is negligible. Indeed, only 3-4% of interviews in Ghana, Nigeria, Uganda and Guatemala had the spouse present. In Brazil it was 11%. Having said that, among the few spouses present, some of them do chip in with factual information. This was especially the case in Nigeria, where almost half of spouse-bystanders assisted the respondent.

How does the interview environment affect data quality?

Perhaps unsurprisingly, noise has a negative impact on interviewer-respondent interactions. Noisier and more chaotic surroundings are generally associated with lower levels of respondent cooperation, attention and friendliness. However, in terms of the proportion of interviews in our study that were disrupted by chaos and noise, this figure was low: in Brazil, Ghana and Uganda only 2-5% of interviews were conducted in a very noisy and chaotic environment. The equivalent figures for Nigeria and Guatemala were a bit higher, ranging between 11 and 15%.

Having said that, again the good news is that noise and distractions had little effect on data quality itself. Indeed, interviewers seem to know how to cut through the noise! Key quality measures – level of ‘don’t knows’, satisficing, and response distributions – were not significantly associated with interviewing environment. We can therefore be confident that the data we collect is of high quality, indeed reflecting respondents’ attitudes and behaviour rather than the environment.

On that note, I will end this communication and say thank you for reading. That is, assuming you weren’t already distracted halfway through…

[1] Charles Q. Lau, Melissa Baker, Andrew Fiore, Diana Greene, Min Lieskovsky, Kim Matu & Emilia Peytcheva (2016): Bystanders, noise, and distractions in face-to-face surveys in Africa and Latin America, International Journal of Social Research Methodology.