Mobile technology: The future of evidence in development?

Written by: Alexandra Cronberg

The future is all about mobile technology, right? Well, perhaps, but in the context of real programme evaluations, it is worth examining and understanding the benefits and drawbacks of mobile data collection modes for gathering evidence, before waving goodbye to human interviewers.

To address this topic, Kantar Public and the British Council hosted a joint event in London covering three studies, one of them carried out in partnership with RTI. The full slide set with the findings is available here. Read on for a brief summary.

  1. How efficient is mobile SMS compared with other methods of collecting evidence from teachers in the Connecting Classrooms programme?

To answer this question, we carried out a pilot survey among participants who had attended the British Council’s Connecting Classrooms training in Ethiopia. The study collected progress data via an SMS survey rather than the paper or telephone alternatives. The pilot showed that SMS has many benefits and some challenges: it is a cost-efficient and viable option for collecting progress data from a known population of programme participants, although the response rate is lower than for telephone or paper. There were also some unexpected challenges during the pilot, including internet downtime and a change in the mobile network operator’s airtime bundles, which affected the administration of incentives. This highlighted the importance of allowing plenty of time for testing and piloting the survey.

The second study addressed the following question:

  2. What is the potential of Interactive Voice Response (IVR), compared with SMS, telephone surveys (CATI), and face-to-face surveys, for collecting information from the general population?

This study, which Kantar Public conducted in partnership with RTI, compared the response rates and representativeness of mobile data collection modes (i.e. SMS, CATI, and IVR) with those of face-to-face interviews. All modes targeted the general adult population in Nigeria aged 18 to 64. The results showed that response rates for SMS and IVR are very low, and even the CATI rate is much lower than for face-to-face. This would matter less if the achieved samples were representative of the general population. Unfortunately, they are far from it: the achieved SMS and IVR samples are heavily skewed towards better-educated and younger people, and towards men. Applying statistical weights to improve sample representativeness is often assumed to solve this problem, but the findings showed that weighting does not, not least when looking at voting behaviour. In fact, weighting actually increased the bias. Finally, with respect to cost, once we adjusted for questionnaire length and sample size, SMS and IVR turned out to be more expensive than CATI on a question-by-question basis.
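For readers less familiar with post-stratification weighting, the sketch below illustrates the basic mechanics with entirely made-up numbers (these are not the study’s data): each respondent group is weighted by the ratio of its population share to its share in the achieved sample. It also hints at why weighting is no cure-all: it corrects only for the characteristics you weight on, so if respondents within a group differ from non-respondents in unmeasured ways, bias can remain or even grow.

```python
# Minimal sketch of post-stratification weighting.
# All group labels, shares, and outcome figures are hypothetical,
# chosen purely for illustration.

# Assumed population shares by education level
population_share = {"primary_or_less": 0.55, "secondary": 0.30, "tertiary": 0.15}

# Shares in a hypothetical achieved SMS sample, skewed towards the better educated
sample_share = {"primary_or_less": 0.15, "secondary": 0.35, "tertiary": 0.50}

# Post-stratification weight = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical outcome: proportion in each group reporting an intention to vote
outcome = {"primary_or_less": 0.40, "secondary": 0.55, "tertiary": 0.70}

# Unweighted estimate: average over the sample as achieved
unweighted = sum(sample_share[g] * outcome[g] for g in sample_share)

# Weighted estimate: groups re-balanced to their population shares
weighted = sum(sample_share[g] * weights[g] * outcome[g] for g in sample_share)

print(f"weights: {weights}")
print(f"unweighted estimate: {unweighted:.3f}")
print(f"weighted estimate:   {weighted:.3f}")

# Weighting fixes the education mix, but if respondents *within* each group
# differ systematically from non-respondents (e.g. are more politically
# engaged), the weighted estimate can still be biased -- and large weights
# can make it noisier, or, as in the study, increase the bias.
```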

The third study addressed this question:

  3. At the level of classroom learning outcomes, what role can mobile technology play? Can mobile improve the immediacy of outcome data collection?

This part of the presentation related to a pilot study that the British Council carried out to test how new technologies – in this case a mobile phone app – can be used to assess core skills at a classroom level. The app enabled assessment at the “point of learning” by teachers, by peers, and by students themselves through self-reflection. It proved to be a useful, easy-to-use tool with scope for further roll-out.

In sum, these studies showed that SMS and IVR have some potential for survey data collection, but that representativeness is a serious concern when these modes target the general population. Perhaps the future isn’t quite what we think it is, at least not yet.